{
"paper_id": "P18-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:39:17.701990Z"
},
"title": "Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Poerner",
"suffix": "",
"affiliation": {},
"email": "poerner@cis.lmu.de"
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The behavior of deep neural networks (DNNs) is hard to understand. This makes it necessary to explore post hoc explanation methods. We conduct the first comprehensive evaluation of explanation methods for NLP. To this end, we design two novel evaluation paradigms that cover two important classes of NLP problems: small context and large context problems. Both paradigms require no manual annotation and are therefore broadly applicable. We also introduce LIMSSE, an explanation method inspired by LIME that is designed for NLP. We show empirically that LIMSSE, LRP and DeepLIFT are the most effective explanation methods and recommend them for explaining DNNs in NLP.",
"pdf_parse": {
"paper_id": "P18-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "The behavior of deep neural networks (DNNs) is hard to understand. This makes it necessary to explore post hoc explanation methods. We conduct the first comprehensive evaluation of explanation methods for NLP. To this end, we design two novel evaluation paradigms that cover two important classes of NLP problems: small context and large context problems. Both paradigms require no manual annotation and are therefore broadly applicable. We also introduce LIMSSE, an explanation method inspired by LIME that is designed for NLP. We show empirically that LIMSSE, LRP and DeepLIFT are the most effective explanation methods and recommend them for explaining DNNs in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "DNNs are complex models that combine linear transformations with different types of nonlinearities. If the model is deep, i.e., has many layers, then its behavior during training and inference is notoriously hard to understand. This is a problem for both scientific methodology and real-world deployment. Scientific methodology demands that we understand our models. In the real world, a decision (e.g., \"your blog post is offensive and has been removed\") by itself is often insufficient; in addition, an explanation of the decision may be required (e.g., \"our system flagged the following words as offensive\"). The European Union plans to mandate that intelligent systems used for sensitive applications provide such explanations (European General Data Protection Regulation, expected 2018, cf. Goodman and Flaxman (2016)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A number of post hoc explanation methods for DNNs have been proposed. Due to the complexity of the DNNs they explain, these methods are necessarily approximations and come with their own sources of error. At this point, it is not clear which of these methods to use when reliable explanations for a specific DNN architecture are needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Definitions. (i) A task method solves an NLP problem, e.g., a GRU that predicts sentiment. (ii) An explanation method explains the behavior of a task method on a specific input. For our purpose, it is a function \u03c6(t, k, X) that assigns real-valued relevance scores for a target class k (e.g., positive) to positions t in an input text X (e.g., \"great food\"). For this example, an explanation method might assign: \u03c6(1, k, X) > \u03c6(2, k, X).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(iii) An (explanation) evaluation paradigm quantitatively evaluates explanation methods for a task method, e.g., by assigning them accuracies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions. (i) We present novel evaluation paradigms for explanation methods for two classes of common NLP tasks (see \u00a72). Crucially, neither paradigm requires manual annotations and our methodology is therefore broadly applicable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(ii) Using these paradigms, we perform a comprehensive evaluation of explanation methods for NLP ( \u00a73). We cover the most important classes of task methods, RNNs and CNNs, as well as the recently proposed Quasi-RNNs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(iii) We introduce LIMSSE ( \u00a73.6), an explanation method inspired by LIME (Ribeiro et al., tasks sentiment analysis, morphological prediction, . . lrp From : kolstad @ cae.wisc.edu ( Joel Kolstad ) Subject : Re : Can Radio Freq . Be Used To Measure Distance ? [...] What is the difference between vertical and horizontal ? Gravity ? Does n't gravity pull down the photons and cause a doppler shift or something ? ( Just kidding ! )",
"cite_spans": [
{
"start": 69,
"end": 96,
"text": "LIME (Ribeiro et al., tasks",
"ref_id": null
},
{
"start": 260,
"end": 265,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "If you find faith to be honest , show me how . David The whole denominational mindset only causes more problems , sadly . ( See section 7 for details . ) Thank you . 'The Armenians just shot and shot . Maybe coz they 're 'quality' cars ; -) 200 posts/day . [...] limsse ms s If you find faith to be honest , show me how . David The whole denominational mindset only causes more problems , sadly . ( See section 7 for details . ) Thank you . 'The Armenians just shot and shot . Maybe coz they 're 'quality' cars ; -) 200 posts/day . [...] Figure 1: Top: sci.electronics post (not hybrid). Underlined: Manual relevance ground truth. Green: evidence for sci.electronics. Task method: CNN. Bottom: hybrid newsgroup post, classified talk.politics.mideast. Green: evidence for talk.politics.mideast. Underlined: talk.politics.mideast fragment. Task method: QGRU. Italics: OOV. Bold: rmax position. See supplementary for full texts.",
"cite_spans": [
{
"start": 257,
"end": 262,
"text": "[...]",
"ref_id": null
},
{
"start": 532,
"end": 537,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2016) that is designed for word-order sensitive task methods (e.g., RNNs, CNNs). We show empirically that LIMSSE, LRP (Bach et al., 2015) and DeepLIFT (Shrikumar et al., 2017) are the most effective explanation methods ( \u00a74): LRP and DeepLIFT are the most consistent methods, while LIMSSE wins the hybrid document experiment.",
"cite_spans": [
{
"start": 118,
"end": 137,
"text": "(Bach et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 151,
"end": 175,
"text": "(Shrikumar et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we introduce two novel evaluation paradigms for explanation methods on two types of common NLP tasks, small context tasks and large context tasks. Small context tasks are defined as those that can be solved by finding short, self-contained indicators, such as words and phrases, and weighing them up (i.e., tasks where CNNs with pooling can be expected to perform well). We design the hybrid document paradigm for evaluating explanation methods on small context tasks. Large context tasks require the correct handling of long-distance dependencies, such as subject-verb agreement. 1 We design the morphosyntactic agreement paradigm for evaluating explanation methods on large context tasks. We could also use human judgments for evaluation. While we use Mohseni and Ragan (2018)'s manual relevance benchmark for comparison, there are two issues with it: (i) Due to the cost of human labor, it is limited in size and domain. (ii) More importantly, a good explanation method should not reflect what humans attend to, but what task methods attend to. For instance, the family name \"Kolstad\" has 11 out of its 13 appearances in the 20 newsgroups corpus in sci.electronics posts. Thus, task methods probably learn it as a sci.electronics indicator. Indeed, the explanation method in Fig 1 (top) marks \"Kolstad\" as relevant, but the human annotator does not.",
"cite_spans": [
{
"start": 598,
"end": 599,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1295,
"end": 1306,
"text": "Fig 1 (top)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation paradigms",
"sec_num": "2"
},
{
"text": "Given a collection of documents, hybrid documents are created by randomly concatenating document fragments. We assume that, on average, the most relevant input for a class k in a hybrid document is located in a fragment that stems from a document with gold label k. Hence, an explanation method succeeds if it places maximal relevance for k inside the correct fragment. Formally, let x t be a word inside hybrid document X that originates from a document X with gold label y(X ). x t 's gold label y(X, t) is set to y(X ). Let f (X) be the class assigned to the hybrid document by a task method, and let \u03c6 be an explanation method as defined above. Let rmax(X, \u03c6) denote the position of the maximally relevant word in X for the predicted class f (X). If this maximally relevant word comes from a document with the correct gold label, the explanation method is awarded a hit:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Small context: Hybrid document paradigm",
"sec_num": "2.1"
},
{
"text": "hit(\u03c6, X) = I[y X, rmax(X, \u03c6) = f (X)] (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Small context: Hybrid document paradigm",
"sec_num": "2.1"
},
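The hit statistic of Eq 1 is straightforward to operationalize. Below is a minimal sketch in Python/NumPy, assuming relevance scores, per-word gold labels y(X, t) and predictions f(X) are already computed; `pointing_game_accuracy` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def pointing_game_accuracy(relevances, word_gold_labels, predictions):
    """Pointing game accuracy: an explanation scores a hit (Eq 1) when the
    maximally relevant word for the predicted class comes from a fragment
    whose source document carries that gold label.

    relevances: list of 1-D arrays, phi(t, f(X), X) per hybrid document
    word_gold_labels: list of 1-D arrays, y(X, t) per word position
    predictions: list of predicted class ids f(X)
    """
    hits = sum(
        int(y[np.argmax(phi)] == f)      # I[y(X, rmax(X, phi)) = f(X)]
        for phi, y, f in zip(relevances, word_gold_labels, predictions)
    )
    return hits / len(predictions)

# toy hybrid document: words 0-2 stem from a label-1 document, words 3-5 from label 0
phi = np.array([0.1, 0.9, 0.2, 0.3, 0.0, 0.1])   # rmax at position 1
y = np.array([1, 1, 1, 0, 0, 0])
print(pointing_game_accuracy([phi], [y], [1]))   # rmax falls in the label-1 fragment
```

With a prediction of class 0 instead, the same rmax position would fall outside the correct fragment and score no hit.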
{
"text": "where I[P ] is 1 if P is true and 0 otherwise. In Fig 1 (bottom) , the explanation method grad L2 1p places rmax outside the correct (underlined) fragment. Therefore, it does not get a hit point, while limsse ms s does. The pointing game accuracy of an explanation method is calculated as its total number of hit points divided by the number of possible hit points. This is a form of the pointing game paradigm from computer vision predicts the agreeing feature in w should pay attention to v. For example, in the sentence \"the children with the telescope are home\", the number of the verb (plural for \"are\") can be predicted from the subject (\"children\") without looking at the verb. If the language allows for v and w to be far apart (Fig 3, top) , successful task methods have to be able to handle large contexts. Linzen et al. (2016) show that English verb number can be predicted by a unidirectional LSTM with accuracy > 99%, based on left context alone. When a task method predicts the correct number, we expect successful explanation methods to place maximal relevance on the subject:",
"cite_spans": [
{
"start": 817,
"end": 837,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 50,
"end": 64,
"text": "Fig 1 (bottom)",
"ref_id": null
},
{
"start": 736,
"end": 748,
"text": "(Fig 3, top)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Small context: Hybrid document paradigm",
"sec_num": "2.1"
},
{
"text": "hit target (\u03c6, X) = I[rmax(X, \u03c6) = target(X)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Small context: Hybrid document paradigm",
"sec_num": "2.1"
},
{
"text": "where target(X) is the location of the subject, and rmax is calculated as above. Regardless of whether the prediction is correct, we expect rmax to fall onto a noun that has the predicted number:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Small context: Hybrid document paradigm",
"sec_num": "2.1"
},
{
"text": "hit feat (\u03c6, X) = I[feat X, rmax(X, \u03c6) = f (X)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Small context: Hybrid document paradigm",
"sec_num": "2.1"
},
{
"text": "where feat(X, t) is the morphological feature (here: number) of x t . In Fig 2, rmax on \"link\" gives a hit target point (and a hit feat point), rmax on \"editor\" gives a hit feat point. grad L2 s does not get any points as \"history\" is not a plural noun.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 79,
"text": "Fig 2,",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Small context: Hybrid document paradigm",
"sec_num": "2.1"
},
{
"text": "Labels for this task can be automatically generated using part-of-speech taggers and parsers, which are available for many languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Small context: Hybrid document paradigm",
"sec_num": "2.1"
},
{
"text": "In this section, we define the explanation methods that will be evaluated. For our purpose, explanation methods produce word relevance scores \u03c6(t, k, X), which are specific to a given class k and a given input X. \u03c6(t, k, X) > \u03c6(t , k, X) means that x t contributed more than x t to the task method's (potential) decision to classify X as k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation methods",
"sec_num": "3"
},
{
"text": "Gradient-based explanation methods approximate the contribution of some DNN input i to some output o with o's gradient with respect to i (Simonyan et al., 2014) . In the following, we consider two output functions o(k, X), the unnormalized class score s(k, X) and the class probability p(k|X):",
"cite_spans": [
{
"start": 137,
"end": 160,
"text": "(Simonyan et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(k, X) = w k \u2022 h(X) + b k (2) p(k|X) = exp s(k, X) K k =1 exp s(k , X)",
"eq_num": "(3)"
}
],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "where k is the target class, h(X) the document representation (e.g., an RNN's final hidden layer),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "w k (resp. b k ) k's weight vector (resp. bias).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "The simple gradient of o(k, X) w.r.t. i is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "grad 1 (i, k, X) = \u2202o(k, X) \u2202i (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "grad 1 underestimates the importance of inputs that saturate a nonlinearity (Shrikumar et al., 2017) . To address this, Sundararajan et al. 2017integrate over all gradients on a linear interpolation \u03b1 \u2208 [0, 1] between a baseline inputX (here: all-zero embeddings) and X:",
"cite_spans": [
{
"start": 76,
"end": 100,
"text": "(Shrikumar et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "grad (i, k, X) = 1 \u03b1=0 \u2202o(k,X+\u03b1(X\u2212X)) \u2202i \u2202\u03b1 \u2248 1 M M m=1 \u2202o(k,X+ m M (X\u2212X)) \u2202i (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
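The effect of Eq 5 can be seen on a one-dimensional toy "network" with a saturating nonlinearity. The sketch below, a numerical illustration not taken from the paper, compares the plain gradient against the averaged interpolated gradient of Eq 5 with M = 50; the analytic derivative stands in for backpropagation.

```python
import numpy as np

def avg_interpolated_grad(grad_fn, x, x_bar, M=50):
    # Eq 5: average the gradient at M points on the line from x_bar to x
    return np.mean([grad_fn(x_bar + (m / M) * (x - x_bar))
                    for m in range(1, M + 1)], axis=0)

# toy output o(x) = tanh(3x), which saturates for x near 1
o = lambda x: np.tanh(3 * x)
grad_o = lambda x: 3 / np.cosh(3 * x) ** 2   # analytic derivative of o

x, x_bar = 1.0, 0.0
simple = x * grad_o(x)                              # input times plain gradient: tiny
integrated = x * avg_interpolated_grad(grad_o, x, x_bar)  # close to o(x) - o(x_bar)
print(simple, integrated, o(x) - o(x_bar))
```

The plain gradient nearly vanishes in the saturated region, while the interpolated average recovers (approximately) the actual change in output relative to the baseline.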
{
"text": "where M is a big enough constant (here: 50). In NLP, symbolic inputs (e.g., words) are often represented as one-hot vectors x t \u2208 {1, 0} |V | and embedded via a real-valued matrix: e t = M x t . Gradients are computed with respect to individual entries of E = [ e 1 . . . e |X| ]. and Hechtlinger (2016) use the L2 norm to reduce vectors of gradients to single values:",
"cite_spans": [
{
"start": 285,
"end": 303,
"text": "Hechtlinger (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "\u03c6 grad L2 (t, k, X) = ||grad( e t , k, E)|| (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "where grad( e t , k, E) is a vector of elementwise gradients w.r.t. e t . Denil et al. (2015) use the dot product of the gradient vector and the embedding 2 , i.e., the gradient of the \"hot\" entry in x t :",
"cite_spans": [
{
"start": 74,
"end": 93,
"text": "Denil et al. (2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "\u03c6 grad dot (t, k, X) = e t \u2022 grad( e t , k, E) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "We use \"grad 1 \" for Eq 4, \"grad \" for Eq 5, \" p \" for Eq 3, \" s \" for Eq 2, \"L2\" for Eq 6 and \"dot\" for Eq 7. This gives us eight explanation methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
{
"text": "grad L2 1s , grad L2 1p , grad dot 1s , grad dot 1p , grad L2 s , grad L2 p , grad dot s , grad dot p .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient-based explanation methods",
"sec_num": "3.1"
},
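The difference between the L2 (Eq 6) and dot-product (Eq 7) reductions is easy to see on a toy linear task method whose score is s(k, X) = w_k \u00b7 mean_t(e_t) + b_k, so that the gradient with respect to every e_t is w_k / T. This is an illustrative sketch with made-up dimensions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, K = 4, 5, 3                       # words, embedding dim, classes (toy sizes)
E = rng.normal(size=(T, d))             # embedded input [e_1 ... e_T]
W = rng.normal(size=(K, d))

k = 0
# toy task method: s(k, X) = w_k . mean_t(e_t) + b_k, hence ds/de_t = w_k / T
grad_et = np.tile(W[k] / T, (T, 1))

phi_grad_L2 = np.linalg.norm(grad_et, axis=1)   # Eq 6: ||grad(e_t, k, E)||
phi_grad_dot = (E * grad_et).sum(axis=1)        # Eq 7: e_t . grad(e_t, k, E)

print(phi_grad_L2)    # identical for every t in this linear model (sign-blind)
print(phi_grad_dot)   # varies with the word embedding and can be negative
```

For this linear model the dot-product scores even sum exactly to the weight-dependent part of the class score, w_k \u00b7 mean_t(e_t), while the L2 scores cannot distinguish the words at all.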
{
"text": "Layer-wise relevance propagation (LRP) is a backpropagation-based explanation method developed for fully connected neural networks and CNNs (Bach et al., 2015) and later extended to LSTMs (Arras et al., 2017b) . In this paper, we use Epsilon LRP (Eq 58, Bach et al. 2015). Remember that the activation of neuron j, a j , is the sum of weighted upstream activations, i a i w i,j , plus bias b j , squeezed through some nonlinearity. We denote the pre-nonlinearity activation of j as a j . The relevance of j, R(j), is distributed to upstream neurons i proportionally to the contribution that i makes to a j in the forward pass:",
"cite_spans": [
{
"start": 140,
"end": 159,
"text": "(Bach et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 188,
"end": 209,
"text": "(Arras et al., 2017b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R(i) = j R(j) a i w i,j a j + esign(a j )",
"eq_num": "(8)"
}
],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "This ensures that relevance is conserved between layers, with the exception of relevance attributed to b j . To prevent numerical instabilities, esign(a ) returns \u2212 if a < 0 and otherwise. We set = .001. The full algorithm is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "R(L k ) = s(k, X)I[k = k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "... recursive application of Eq 8 ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "\u03c6 lrp (t, k, X) = dim( et) j=1 R(e t,j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "where L is the final layer, k the target class and R(e t,j ) the relevance of dimension j in the t'th embedding vector. For \u2192 0 and provided that all nonlinearities up to the unnormalized class score are relu, Epsilon LRP is equivalent to the product of input and raw score gradient (here: grad dot 1s ) (Kindermans et al., 2016) . In our experiments, the second requirement holds only for CNNs.",
"cite_spans": [
{
"start": 304,
"end": 329,
"text": "(Kindermans et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
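Eq 8 and the three-step algorithm above can be sketched for a tiny fully connected relu network. The code below is a minimal NumPy illustration under simplifying assumptions (zero biases, toy random weights); it also checks the equivalence with input times raw-score gradient noted for relu networks.

```python
import numpy as np

def lrp_epsilon(x, layers, k, eps=1e-6):
    """Epsilon LRP (Eq 8) for a toy relu MLP whose last layer outputs
    unnormalized class scores. layers: list of (W, b), W of shape (out, in)."""
    acts, pre, a = [x], [], x
    for i, (W, b) in enumerate(layers):
        z = W @ a + b
        pre.append(z)                                 # pre-nonlinearity activations
        a = z if i == len(layers) - 1 else np.maximum(z, 0.0)  # relu hidden layers
        acts.append(a)
    R = np.zeros_like(acts[-1])
    R[k] = acts[-1][k]                                # R(L_k') = s(k, X) I[k' = k]
    # recursive application of Eq 8 down to the input
    for (W, b), a_in, z in zip(reversed(layers), reversed(acts[:-1]), reversed(pre)):
        denom = z + np.where(z < 0.0, -eps, eps)      # esign stabilizer
        R = a_in * (W.T @ (R / denom))
    return R

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)
layers = [(W1, np.zeros(4)), (W2, np.zeros(2))]       # zero biases: nothing lost to b_j
R = lrp_epsilon(x, layers, k=0)

# relu-only net: epsilon LRP matches input times raw-score gradient (grad dot 1s)
z1 = W1 @ x
grad = (W2[0] * (z1 > 0)) @ W1
print(R, x * grad)
```

Relevance is also conserved here: with zero biases, the input relevances sum (up to \u03b5 effects) to the class score s(k, X).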
{
"text": "Experiments by Ancona et al. (2017) (see \u00a76) suggest that LRP does not work well for LSTMs if all neurons -including gates -participate in backpropagation. We therefore use Arras et al. (2017b)'s modification and treat sigmoid-activated gates as time step-specific weights rather than neurons. For instance, the relevance of LSTM candidate vector g t is calculated from memory vector c t and input gate vector i t as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "R(g t,d ) = R(c t,d ) g t,d \u2022 i t,d c t,d + esign(c t,d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "This is equivalent to applying Eq 8 while treating i t as a diagonal weight matrix. The gate neurons in i t do not receive any relevance themselves. See supplementary material for formal definitions of Epsilon LRP for different architectures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Layer-wise relevance propagation",
"sec_num": "3.2"
},
{
"text": "DeepLIFT (Shrikumar et al., 2017) is another backpropagation-based explanation method. Unlike LRP, it does not explain s(k, X), but s(k, X)\u2212s(k,X), whereX is some baseline input (here: all-zero embeddings). Following Ancona et al. 2018) (Eq 4, we use this backpropagation rule:",
"cite_spans": [
{
"start": 9,
"end": 33,
"text": "(Shrikumar et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "3.3"
},
{
"text": "R(i) = j R(j) a i w i,j \u2212\u0101 i w i,j a j \u2212\u0101 j + esign(a j \u2212\u0101 j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "3.3"
},
{
"text": "where\u0101 refers to the forward pass of the baseline. Note that the original method has a different mechanism for avoiding small denominators; we use esign for compatibility with LRP. The DeepLIFT algorithm is started with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "3.3"
},
{
"text": "R(L k ) = s(k, X)\u2212s(k,X) I[k = k].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "3.3"
},
{
"text": "On gated (Q)RNNs, we proceed analogous to LRP and treat gates as weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepLIFT",
"sec_num": "3.3"
},
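The DeepLIFT rule as written here can be sketched on the same kind of toy relu network, now with nonzero biases and a zero baseline. This is an illustrative NumPy sketch, not the original DeepLIFT implementation; a useful sanity check is that the input relevances sum (approximately) to s(k, X) \u2212 s(k, X\u0304), since the bias terms cancel in every delta.

```python
import numpy as np

def esign(a, eps=1e-6):
    return np.where(a < 0.0, -eps, eps)

def forward(x, layers):
    # post-nonlinearity activations per layer plus pre-activations (relu hidden layers)
    acts, pre, a = [x], [], x
    for i, (W, b) in enumerate(layers):
        z = W @ a + b
        pre.append(z)
        a = z if i == len(layers) - 1 else np.maximum(z, 0.0)
        acts.append(a)
    return acts, pre

def deeplift(x, x_bar, layers, k, eps=1e-6):
    acts, pre = forward(x, layers)
    acts_b, pre_b = forward(x_bar, layers)
    R = np.zeros_like(acts[-1])
    R[k] = acts[-1][k] - acts_b[-1][k]   # R(L_k') = (s(k,X) - s(k,X_bar)) I[k' = k]
    for (W, b), a, ab, z, zb in zip(reversed(layers), reversed(acts[:-1]),
                                    reversed(acts_b[:-1]), reversed(pre), reversed(pre_b)):
        dz = z - zb                       # biases cancel in the difference
        R = (a - ab) * (W.T @ (R / (dz + esign(dz, eps))))
    return R

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(4, 3)), rng.normal(size=4)),
          (rng.normal(size=(2, 4)), rng.normal(size=2))]
x, x_bar = rng.normal(size=3), np.zeros(3)
R = deeplift(x, x_bar, layers, k=0)

s_x = forward(x, layers)[0][-1][0]
s_b = forward(x_bar, layers)[0][-1][0]
print(R.sum(), s_x - s_b)                 # approximately equal
```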
{
"text": "The cell decomposition explanation method for LSTMs (Murdoch and Szlam, 2017) decomposes the unnormalized class score s(k, X) (Eq 2) into additive contributions. For every time step t, we compute how much of c t \"survives\" until the final step T and contributes to s(k, X). This is achieved by applying all future forget gates f , the final tanh nonlinearity, the final output gate o T , as well as the class weights of k to c t . We call this quantity \"net load of t for class k\":",
"cite_spans": [
{
"start": 52,
"end": 77,
"text": "(Murdoch and Szlam, 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cell decomposition for gated RNNs",
"sec_num": "3.4"
},
{
"text": "nl(t, k, X) = w k \u2022 o T tanh ( T j=t+1 f j ) c t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cell decomposition for gated RNNs",
"sec_num": "3.4"
},
{
"text": "where and are applied elementwise. The relevance of t is its gain in net load relative to t \u2212 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cell decomposition for gated RNNs",
"sec_num": "3.4"
},
{
"text": "\u03c6 decomp (t, k, X) = nl(t, k, X) \u2212 nl(t \u2212 1, k, X).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cell decomposition for gated RNNs",
"sec_num": "3.4"
},
{
"text": "For GRU, we change the definition of net load:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cell decomposition for gated RNNs",
"sec_num": "3.4"
},
{
"text": "nl(t, k, X) = w k \u2022 ( T j=t+1 z j ) h t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cell decomposition for gated RNNs",
"sec_num": "3.4"
},
{
"text": "where z are GRU update gates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cell decomposition for gated RNNs",
"sec_num": "3.4"
},
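The net-load definition for LSTMs can be sketched directly from precomputed gate and memory sequences. The values below are hypothetical random gates and memory vectors, not a trained LSTM; the check exploits that the contributions telescope, so they sum to the class-k score of the final state.

```python
import numpy as np

rng = np.random.default_rng(2)
T, d = 5, 3
c = rng.normal(size=(T, d))              # hypothetical memory vectors c_1..c_T
f = rng.uniform(0.2, 0.9, size=(T, d))   # forget gates f_1..f_T (sigmoid range)
o_T = rng.uniform(0.2, 0.9, size=d)      # final output gate
w_k = rng.normal(size=d)                 # class weight vector

def phi_decomp(c, f, o_T, w_k):
    """phi_decomp(t) = nl(t) - nl(t-1), with
    nl(t, k, X) = w_k . (o_T * tanh(prod_{j=t+1}^{T} f_j * c_t)); c_0 = 0."""
    T, d = c.shape
    c_full = np.vstack([np.zeros(d), c])     # prepend initial memory c_0 = 0
    suffix = np.ones((T + 1, d))             # suffix[t] = elementwise prod_{j=t+1}^{T} f_j
    for t in range(T - 1, -1, -1):
        suffix[t] = suffix[t + 1] * f[t]
    nl = np.array([w_k @ (o_T * np.tanh(suffix[t] * c_full[t])) for t in range(T + 1)])
    return nl[1:] - nl[:-1]

phi = phi_decomp(c, f, o_T, w_k)
print(phi.sum(), w_k @ (o_T * np.tanh(c[-1])))   # equal by telescoping
```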
{
"text": "Input perturbation methods assume that the removal or masking of relevant inputs changes the output (Zeiler and Fergus, 2014). Omissionbased methods remove inputs completely (K\u00e1d\u00e1r et al., 2017) , while occlusion-based methods replace them with a baseline (Li et al., 2016b) . In computer vision, perturbations are usually applied to patches, as neighboring pixels tend to correlate (Zintgraf et al., 2017) . To calculate the omit N (resp. occ N ) relevance of word x t , we delete (resp. occlude), one at a time, all N -grams that contain x t , and average the change in the unnormalized class score from Eq 2:",
"cite_spans": [
{
"start": 174,
"end": 194,
"text": "(K\u00e1d\u00e1r et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 256,
"end": 274,
"text": "(Li et al., 2016b)",
"ref_id": "BIBREF22"
},
{
"start": 383,
"end": 406,
"text": "(Zintgraf et al., 2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Input perturbation methods",
"sec_num": "3.5"
},
{
"text": "\u03c6 [omit|occ] N (t, k, X) = N j=1 s(k, [ e 1 . . . e |X| ]) \u2212s(k, [ e 1 . . . e t\u2212N \u22121+j ] \u0112 [ e t+j . . . e |X| ]) 1 N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input perturbation methods",
"sec_num": "3.5"
},
{
"text": "where e t are embedding vectors, denotes concatenation and\u0112 is either a sequence of length zero (\u03c6 omit ) or a sequence of N baseline (here: all-zero) embedding vectors (\u03c6 occ ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input perturbation methods",
"sec_num": "3.5"
},
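The occlusion variant can be sketched for a black-box score over an embedded sequence. This is a toy NumPy illustration with a made-up linear score and a hypothetical `phi_occ` helper; N-grams that would extend past the document boundary are simply clipped here, which is one possible convention.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 6, 4
E = rng.normal(size=(T, d))                 # embedded document [e_1 ... e_T]
w = rng.normal(size=d)
score = lambda E_: w @ E_.mean(axis=0)      # toy unnormalized class score s(k, X)

def phi_occ(score, E, t, N=2):
    """occ_N relevance of word t (0-indexed): average drop in s(k, .) when each
    of the N N-grams covering t is replaced by baseline (all-zero) embeddings."""
    full, drops = score(E), []
    for start in range(t - N + 1, t + 1):   # every N-gram start covering position t
        lo, hi = max(start, 0), min(start + N, len(E))
        Ez = E.copy()
        Ez[lo:hi] = 0.0                     # occlude the N-gram with the baseline
        drops.append(full - score(Ez))
    return float(np.mean(drops))

# for N = 1 and this linear score, the drop is exactly w . e_t / T
print(phi_occ(score, E, 2, N=1), E[2] @ w / T)
```

The omission variant would instead delete `E[lo:hi]` (shortening the sequence) rather than zeroing it.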
{
"text": "Local Interpretable Model-agnostic Explanations (LIME) (Ribeiro et al., 2016) is a framework for explaining predictions of complex classifiers. LIME approximates the behavior of classifier f in the neighborhood of input X with an interpretable (here: linear) model. The interpretable model is trained on samples Z 1 . . . Z N (here: N = 3000), which are randomly drawn from X, with \"gold labels\" f (Z 1 ) . . . f (Z N ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSSE: LIME for NLP",
"sec_num": "3.6"
},
{
"text": "Since RNNs and CNNs respect word order, we cannot use the bag of words sampling method from the original description of LIME. Instead, we introduce Local Interpretable Model-agnostic Substring-based Explanations (LIMSSE). LIMSSE uniformly samples a length l n (here: 1 \u2264 l n \u2264 6) and a starting point s n , which define the substring",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSSE: LIME for NLP",
"sec_num": "3.6"
},
{
"text": "Z n = [ x sn . . . x sn+ln\u22121 ]. To the linear model, Z n is rep- resented by a binary vector z n \u2208 {0, 1} |X| , where z n,t = I[s n \u2264 t < s n + l n ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSSE: LIME for NLP",
"sec_num": "3.6"
},
{
"text": "We learn a linear weight vector\u02c6 v k \u2208 R |X| , whose entries are word relevances for k, i.e., \u03c6 limsse (t, k, X) =v k,t . To optimize it, we experiment with three loss functions. The first, which we will refer to as limsse bb , assumes that our DNN is a total black box that delivers only a classification:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSSE: LIME for NLP",
"sec_num": "3.6"
},
{
"text": "v k = argmin v k n \u2212 log \u03c3( z n \u2022 v k ) I[f (Z n ) = k] + log 1 \u2212 \u03c3( z n \u2022 v k ) I[f (Z n ) = k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSSE: LIME for NLP",
"sec_num": "3.6"
},
{
"text": "where f (Z n ) = argmax k p(k |Z n ) . The black box approach is maximally general, but insensitive to the magnitude of evidence found in Z n . Hence, we also test magnitude-sensitive loss functions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSSE: LIME for NLP",
"sec_num": "3.6"
},
{
"text": "v k = argmin v k n z n \u2022 v k \u2212 o(k, Z n ) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSSE: LIME for NLP",
"sec_num": "3.6"
},
{
"text": "where o(k, Z n ) is one of s(k, Z n ) or p(k|Z n ). We refer to these as limsse ms s and limsse ms p .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSSE: LIME for NLP",
"sec_num": "3.6"
},
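The magnitude-sensitive variant reduces to a least-squares fit over sampled substrings. The sketch below is a minimal NumPy version under toy assumptions: the task method is a black-box score function, the function name `limsse_ms_s` and the single-relevant-word toy score are illustrative, and substrings that run past the document end are clipped.

```python
import numpy as np

def limsse_ms_s(score, T, n_samples=3000, max_len=6, rng=None):
    """LIMSSE (magnitude-sensitive, score-based): sample substrings
    [s_n, s_n + l_n), encode each as a binary vector z_n over positions,
    and fit v_k by least squares against the class score of the substring."""
    rng = rng if rng is not None else np.random.default_rng(0)
    Z = np.zeros((n_samples, T))
    y = np.zeros(n_samples)
    for n in range(n_samples):
        l = rng.integers(1, max_len + 1)        # substring length 1..max_len
        s0 = rng.integers(0, T)                 # start position (clipped at the end)
        Z[n, s0:s0 + l] = 1.0
        y[n] = score(np.flatnonzero(Z[n]))      # black-box score of the substring
    v_k, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return v_k                                  # v_k[t] = phi_limsse(t, k, X)

# toy document of 8 words; only the word at position 3 is evidence for class k
score = lambda positions: 2.0 if 3 in positions else 0.0
v = limsse_ms_s(score, T=8)
print(v)   # relevance concentrates on position 3
```

Because the toy score is exactly linear in z_n, the regression recovers the relevant position; with a real DNN the fit is only a local approximation.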
{
"text": "For the hybrid document experiment, we use the 20 newsgroups corpus (topic classification) (Lang, 1995) and reviews from the 10th yelp dataset challenge (binary sentiment analysis) 3 . We train five DNNs per corpus: a bidirectional GRU (Cho et al., 2014) , a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) , a 1D CNN with global max pooling (Collobert et al., 2011) , a bidirectional Quasi-GRU (QGRU), and a bidirectional Quasi-LSTM (QLSTM). The Quasi-RNNs are 1D CNNs with a feature-wise gated recursive pooling layer (Bradbury et al., 2017) . Word embeddings are R 300 and initialized with pre-trained GloVe embeddings (Pennington et al., 2014) 4 . The main layer has a hidden size of 150 (bidirectional architectures: 75 dimensions per direction). For the QRNNs and CNN, we use a kernel width of 5. In all five architectures, the resulting document representation is projected to 20 (resp. two) dimensions using a fully connected layer, followed by a softmax. See supplementary material for details on training and regularization. After training, we sentence-tokenize the test sets, shuffle the sentences, concatenate ten sentences at a time and classify the resulting hybrid documents. Documents that are assigned a class that is not the gold label of at least one constituent word are discarded (yelp: < 0.1%; 20 newsgroups: 14% -20%). On the remaining documents, we use the explanation methods from \u00a73 to find the maximally relevant word for each prediction. The random baseline samples the maximally relevant word from a uniform distribution.",
"cite_spans": [
{
"start": 91,
"end": 103,
"text": "(Lang, 1995)",
"ref_id": "BIBREF19"
},
{
"start": 236,
"end": 254,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 278,
"end": 312,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF16"
},
{
"start": 348,
"end": 372,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 526,
"end": 549,
"text": "(Bradbury et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 628,
"end": 653,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid document experiment",
"sec_num": "4.1"
},
{
"text": "For reference, we also evaluate on a human judgment benchmark (Mohseni and Ragan, 2018): 188 documents from the 20 newsgroups test set (classes sci.med and sci.electronics), with one manually created list of relevant words per document. We discard documents that are incorrectly classified (20% - 27%) and define: hit(\u03c6, X) = I[rmax(X, \u03c6) \u2208 gt(X)], where gt(X) is the manual ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid document experiment",
"sec_num": "4.1"
},
{
"text": "For the morphosyntactic agreement experiment, we use automatically annotated English Wikipedia sentences by Linzen et al. (2016) 5 . For our purpose, a sample consists of: all words preceding the verb:",
"cite_spans": [
{
"start": 108,
"end": 130,
"text": "Linzen et al. (2016) 5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
{
"text": "X = [x_1 ... x_T]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
{
"text": "; part-of-speech (POS) tags: pos(X, t) \u2208 {VBZ, VBP, NN, NNS, ...}; and the position of the subject:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
{
"text": "target(X) \u2208 [1, T ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
{
"text": "The number feature is derived from the POS:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
{
"text": "feat(X, t) = Sg if pos(X, t) \u2208 {VBZ, NN}; Pl if pos(X, t) \u2208 {VBP, NNS}; n/a otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
{
"text": "The gold label of a sentence is the number of its verb, i.e., y(X) = feat(X, T + 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
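The feat(X, t) mapping and the gold label y(X) = feat(X, T + 1) translate directly into code (function names are ours):

```python
def feat(pos_tag):
    """Number feature derived from a POS tag, following feat(X, t)."""
    if pos_tag in {"VBZ", "NN"}:
        return "Sg"
    if pos_tag in {"VBP", "NNS"}:
        return "Pl"
    return None  # "n/a" in the paper's notation

def gold_label(pos_tags):
    """pos_tags: POS tags of x_1 .. x_T followed by the verb's tag at
    position T+1; the gold label is the verb's number feature."""
    return feat(pos_tags[-1])
```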
{
"text": "5 www.tallinzen.net/media/rnn_agreement/agr_50_mostcommon_10K.tsv.gz",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
{
"text": "As task methods, we replicate Linzen et al. (2016)'s unidirectional LSTM (R^50 randomly initialized word embeddings, hidden size 50). We also train unidirectional GRU, QGRU and QLSTM architectures with the same dimensionality. We use the explanation methods from \u00a73 to find the most relevant word for predictions on the test set. As described in \u00a72.2, explanation methods are awarded a hit target (resp. hit feat) point if this word is the subject (resp. a noun with the predicted number feature). For reference, we use a random baseline as well as a baseline that assumes that the most relevant word directly precedes the verb.",
"cite_spans": [
{
"start": 30,
"end": 50,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic agreement experiment",
"sec_num": "4.2"
},
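The two hit criteria can be sketched minimally (names and signatures are ours; the number feature of a noun follows the feat definition in \u00a74.2):

```python
def hit_target(rmax_index, subject_index):
    """1 if the maximally relevant word is the subject."""
    return int(rmax_index == subject_index)

def hit_feat(rmax_index, pos_tags, predicted_number):
    """1 if the maximally relevant word is a noun whose number feature
    (NN -> Sg, NNS -> Pl) matches the predicted number."""
    noun_number = {"NN": "Sg", "NNS": "Pl"}.get(pos_tags[rmax_index])
    return int(noun_number == predicted_number)
```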
{
"text": "Our experiments suggest that explanation methods for neural NLP differ in quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation methods",
"sec_num": "5.1"
},
{
"text": "As in previous work (see \u00a76), gradient L2 norm (grad L2 ) performs poorly, especially on RNNs. We assume that this is due to its inability to distinguish relevances for and against k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation methods",
"sec_num": "5.1"
},
{
"text": "Gradient embedding dot product (grad dot ) is competitive on CNN (Table 2, grad dot 1p C05, grad dot 1s C10, C15), presumably because relu is linear on positive inputs, so gradients are exact instead of approximate.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 83,
"text": "(Table 2, grad dot",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Explanation methods",
"sec_num": "5.1"
},
{
"text": "[Figure 3: example explanations. Top: decomp, deeplift and limsse ms p highlight words in the Wikipedia fragment \"initially a pagan culture , detailed information about the return of the christian religion to the islands during the norse-era [is ...]\". Bottom: lrp and limsse ms p highlight words in yelp review sentences.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "lrp",
"sec_num": null
},
{
"text": "grad dot also has decent performance for GRU (grad dot 1p C01, grad dot s C{06, 11, 16, 20, 24}), perhaps because GRU hidden activations are always in [-1,1], where tanh and \u03c3 are approximately linear.",
"cite_spans": [
{
"start": 564,
"end": 570,
"text": "[-1,1]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "lrp",
"sec_num": null
},
{
"text": "Integrated gradient (grad ) mostly outperforms simple gradient (grad 1 ), though not consistently (C01, C07). Contrary to expectation, integration did not help much with the failure of the gradient method on LSTM on 20 newsgroups (grad dot 1 vs. grad dot in C08, C13), which we had assumed to be due to saturation of tanh on large absolute activations in c. Smaller intervals may be needed to approximate the integral; however, this comes at additional computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "lrp",
"sec_num": null
},
{
"text": "The gradient of s(k, X) performs better than or similarly to the gradient of p(k|X). The main exception is yelp (grad dot 1s vs. grad dot 1p , C01-C05). This is probably because p(k|X) conflates evidence for k (the numerator in Eq 3) with evidence against competitor classes (the denominator). In a two-class scenario, there is little incentive to keep classes separate, leading to information flow through the denominator. In future work, we will replace the two-way softmax with a one-way sigmoid such that \u03c6(t, 0, X) := \u2212\u03c6(t, 1, X).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "lrp",
"sec_num": null
},
{
"text": "LRP and DeepLIFT are the most consistent explanation methods across evaluation paradigms and task methods. (The comparatively low pointing game accuracies on the yelp QRNNs and CNN (C02, C04, C05) are probably due to the fact that they explain s(k, .) in a two-way softmax, see above.) On CNN (C05, C10, C15), LRP and grad dot 1s perform almost identically, suggesting that they are indeed quasi-equivalent on this architecture (see \u00a73.2). On (Q)RNNs, modified LRP and DeepLIFT appear to be superior to the gradient method (lrp vs. grad dot 1s , deeplift vs. grad dot s , C01-C04, C06-C09, C11-C14, C16-C27).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "lrp",
"sec_num": null
},
{
"text": "Decomposition performs well on LSTM, especially in the morphosyntactic agreement exper-iment, but it is inconsistent on other architectures. Gated RNNs have a long-term additive and a multiplicative pathway, and the decomposition method only detects information traveling via the additive one. Miao et al. (2016) show qualitatively that GRUs often reorganize long-term memory abruptly, which might explain the difference between LSTM and GRU. QRNNs only have additive recurrent connections; however, given that c t (resp. h t ) is calculated by convolution over several time steps, decomposition relevance can be incorrectly attributed inside that window. This likely is the reason for the stark difference between the performance of decomposition on QRNNs in the hybrid document experiment and on the manually labeled data (C07, C09 vs. C12, C14). Overall, we do not recommend the decomposition method, because it fails to take into account all routes by which information can be propagated.",
"cite_spans": [
{
"start": 294,
"end": 312,
"text": "Miao et al. (2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "lrp",
"sec_num": null
},
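For a unidirectional LSTM with final output gate o_T and output weights W_k for the predicted class, the decomposition method of Murdoch and Szlam (2017) attributes to step t the change tanh(c_t) \u2212 tanh(c_{t-1}) contributes to the class score. A minimal numpy sketch under our assumptions (array shapes, zero initial cell state, and raw additive contributions rather than exponentiated scores):

```python
import numpy as np

def decomposition_relevance(c, o_T, W_k):
    """c: (T, d) LSTM cell states; o_T: (d,) final output gate;
    W_k: (d,) output weights of the predicted class.
    Returns (T,) additive contributions; they telescope to the
    class score W_k . (o_T * tanh(c_T))."""
    # tanh(c_t) - tanh(c_{t-1}), with tanh(c_0) taken as 0
    deltas = np.diff(np.tanh(c), axis=0, prepend=0.0)
    return deltas @ (o_T * W_k)
```

Only relevance traveling through the additive cell-state pathway is captured, which is exactly the limitation discussed in the paragraph above.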
{
"text": "Omission and occlusion produce inconsistent results in the hybrid document experiment. Shrikumar et al. (2017) show that perturbation methods can lack sensitivity when there are more relevant inputs than the \"perturbation window\" covers. In the morphosyntactic agreement experiment, omission is not competitive; we assume that this is because it interferes too much with syntactic structure. occ 1 does better (esp. C16-C19), possibly because an all-zero \"placeholder\" is less disruptive than word removal. But despite some high scores, it is less consistent than other explanation methods.",
"cite_spans": [
{
"start": 87,
"end": 110,
"text": "Shrikumar et al. (2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "lrp",
"sec_num": null
},
{
"text": "Magnitude-sensitive LIMSSE (limsse ms ) consistently outperforms black-box LIMSSE (limsse bb ), which suggests that numerical outputs should be used for approximation where possible. In the hybrid document experiment, magnitude-sensitive LIMSSE outperforms the other explanation methods (exceptions: C03, C05). However, it fails in the morphosyntactic agreement experiment (C16-C27). In fact, we expect LIMSSE to be unsuited for large context problems, as it cannot discover dependencies whose range is bigger than a given text sample. In Fig 3 (top) , limsse ms p highlights any singular noun without taking into account how that noun fits into the overall syntactic structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 536,
"end": 550,
"text": "In Fig 3 (top)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "lrp",
"sec_num": null
},
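Magnitude-sensitive LIMSSE can be sketched as follows. The core idea — sample random substrings of the input and regress the model's numeric output on binary word-occurrence features — follows the description in \u00a73; the substring length distribution, sample count and the use of unregularized least squares here are our simplifying assumptions.

```python
import numpy as np

def limsse_ms(words, score_fn, n_samples=1000, max_len=6, seed=0):
    """words: tokenized input; score_fn: maps a word list to the model's
    numeric score for the explained class. Returns one relevance weight
    per input position (the fitted linear-model coefficients)."""
    rng = np.random.default_rng(seed)
    T = len(words)
    X = np.zeros((n_samples, T))  # binary occurrence features
    y = np.zeros(n_samples)       # numeric model outputs
    for i in range(n_samples):
        length = int(rng.integers(1, max_len + 1))
        start = int(rng.integers(0, max(T - length + 1, 1)))
        X[i, start:start + length] = 1.0
        y[i] = score_fn(words[start:start + length])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

Because every sample is a contiguous substring, the surrogate model can only see local context — consistent with the observation above that LIMSSE cannot discover dependencies longer than its text samples.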
{
"text": "The assumptions made by our automatic evaluation paradigms have exceptions: (i) the correlation between fragment of origin and relevance does not always hold (e.g., a positive review may contain negative fragments, and will almost certainly contain neutral fragments); (ii) in morphological prediction, we cannot always expect the subject to be the only predictor for number. In Fig 2 (bottom) for example, \"few\" is a reasonable clue for plural despite not being a noun. This imperfect ground truth means that absolute pointing game accuracies should be taken with a grain of salt; but we argue that this does not invalidate them for comparisons.",
"cite_spans": [],
"ref_spans": [
{
"start": 376,
"end": 393,
"text": "In Fig 2 (bottom)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation paradigms",
"sec_num": "5.2"
},
{
"text": "We also point out that there are characteristics of explanations that may be desirable but are not reflected by the pointing game. Consider Fig 3 (bottom) . Both explanations get hit points, but the lrp explanation appears \"cleaner\" than limsse ms p , with relevance concentrated on fewer tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 155,
"text": "Consider Fig 3 (bottom)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation paradigms",
"sec_num": "5.2"
},
{
"text": "6 Related work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation paradigms",
"sec_num": "5.2"
},
{
"text": "Explanation methods can be divided into local and global methods (Doshi-Velez and Kim, 2017) . Global methods infer general statements about what a DNN has learned, e.g., by clustering documents (Aubakirova and Bansal, 2016) or n-grams (K\u00e1d\u00e1r et al., 2017) according to the neurons that they activate. Li et al. (2016a) compare embeddings of specific words with reference points to measure how drastically they were changed during training. In computer vision, Simonyan et al. (2014) optimize the input space to maximize the activation of a specific neuron. Global explanation methods are of limited value for explaining a specific prediction as they represent average behavior. Therefore, we focus on local methods.",
"cite_spans": [
{
"start": 65,
"end": 92,
"text": "(Doshi-Velez and Kim, 2017)",
"ref_id": "BIBREF13"
},
{
"start": 195,
"end": 224,
"text": "(Aubakirova and Bansal, 2016)",
"ref_id": "BIBREF5"
},
{
"start": 236,
"end": 256,
"text": "(K\u00e1d\u00e1r et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 302,
"end": 319,
"text": "Li et al. (2016a)",
"ref_id": "BIBREF21"
},
{
"start": 461,
"end": 483,
"text": "Simonyan et al. (2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation methods",
"sec_num": "6.1"
},
{
"text": "Local explanation methods explain a decision taken for one specific input at a time. We have attempted to include all important local methods for NLP in our experiments (see \u00a73). We do not address self-explanatory models (e.g., attention (Bahdanau et al., 2015) or rationale models (Lei et al., 2016) ), as these are very specific architectures that may not be applicable to all tasks.",
"cite_spans": [
{
"start": 238,
"end": 261,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 282,
"end": 300,
"text": "(Lei et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation methods",
"sec_num": "6.1"
},
{
"text": "According to Doshi-Velez and Kim (2017)'s taxonomy of explanation evaluation paradigms, application-grounded paradigms test how well an explanation method helps real users solve real tasks (e.g., doctors judge automatic diagnoses); human-grounded paradigms rely on proxy tasks (e.g., humans rank task methods based on explanations); functionally-grounded paradigms work without human input, like our approach. Arras et al. (2016) (cf. Samek et al. (2016)) propose a functionally-grounded explanation evaluation paradigm for NLP where words in a correctly (resp. incorrectly) classified document are deleted in descending (resp. ascending) order of relevance. They assume that the fewer words must be deleted to reduce (resp. increase) accuracy, the better the explanations. According to this metric, LRP ( \u00a73.2) outperforms grad L2 on CNNs (Arras et al., 2016) and LSTMs (Arras et al., 2017b) on 20 newsgroups. Ancona et al. (2017) perform the same experiment with a binary sentiment analysis LSTM. Their graph shows occ 1 , grad dot 1 and grad dot tied in first place, while LRP, DeepLIFT and the gradient L1 norm lag behind. Note that their treatment of LSTM gates in LRP / DeepLIFT differs from our implementation.",
"cite_spans": [
{
"start": 13,
"end": 39,
"text": "Doshi-Velez and Kim (2017)",
"ref_id": "BIBREF13"
},
{
"start": 411,
"end": 429,
"text": "Arras et al. (2016",
"ref_id": "BIBREF2"
},
{
"start": 430,
"end": 455,
"text": ") (cf. Samek et al. (2016",
"ref_id": null
},
{
"start": 842,
"end": 862,
"text": "(Arras et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 873,
"end": 894,
"text": "(Arras et al., 2017b)",
"ref_id": "BIBREF4"
},
{
"start": 913,
"end": 933,
"text": "Ancona et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation evaluation",
"sec_num": "6.2"
},
{
"text": "An issue with the word deletion paradigm is that it uses syntactically broken inputs, which may introduce artefacts (Sundararajan et al., 2017) . In our hybrid document paradigm, inputs are syntactically intact (though semantically incoherent at the document level); the morphosyntactic agreement paradigm uses unmodified inputs.",
"cite_spans": [
{
"start": 116,
"end": 143,
"text": "(Sundararajan et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation evaluation",
"sec_num": "6.2"
},
{
"text": "Another class of functionally-grounded evaluation paradigms interprets the performance of a secondary task method, on inputs that are derived from (or altered by) an explanation method, as a proxy for the quality of that explanation method. Murdoch and Szlam (2017) build a rule-based classifier from the most relevant phrases in a corpus (task method: LSTM). The classifier based on decomp ( \u00a73.4) outperforms the gradient-based classifier, which is in line with our results. Arras et al. (2017a) build document representations by summing over word embeddings weighted by relevance scores (task method: CNN). They show that K-nearest neighbor performs better on document representations derived with LRP than on those derived with grad L2 , which also matches our results. Denil et al. (2015) condense documents by extracting top-K relevant sentences, and let the original task method (CNN) classify them. The accuracy loss, relative to uncondensed documents, is smaller for grad dot than for heuristic baselines.",
"cite_spans": [
{
"start": 774,
"end": 793,
"text": "Denil et al. (2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation evaluation",
"sec_num": "6.2"
},
{
"text": "In the domain of human-based evaluation paradigms, Ribeiro et al. (2016) compare different variants of LIME ( \u00a73.6) by how well they help non-experts clean a corpus from words that lead to overfitting. Selvaraju et al. (2017) assess how well explanation methods help non-experts identify the more accurate out of two object recognition CNNs. These experiments come closer to real use cases than functionally-grounded paradigms; however, they are less scalable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation evaluation",
"sec_num": "6.2"
},
{
"text": "We conducted the first comprehensive evaluation of explanation methods for NLP, an important undertaking because there is a need for understanding the behavior of DNNs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "To conduct this study, we introduced evaluation paradigms for explanation methods for two classes of NLP tasks, small context tasks (e.g., topic classification) and large context tasks (e.g., morphological prediction). Neither paradigm requires manual annotations. We also introduced LIMSSE, a substring-based explanation method inspired by LIME and designed for NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "Based on our experimental results, we recommend LRP, DeepLIFT and LIMSSE for small context tasks and LRP and DeepLIFT for large context tasks, on all five DNN architectures that we tested. On CNNs and possibly GRUs, the (integrated) gradient embedding dot product is a good alternative to DeepLIFT and LRP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "Our implementation of LIMSSE, the gradient, perturbation and decomposition methods can be found in our branch of the keras package: www.github.com/ NPoe/keras.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code",
"sec_num": "8"
},
{
"text": "To re-run our experiments, see scripts in www.github.com/NPoe/ neural-nlp-explanation-experiment. Our LRP implementation (same repository) is adapted from Arras et al. (2017b) 6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code",
"sec_num": "8"
},
{
"text": "Consider deciding the number of [verb] in \"the children in the green house said that the big telescope [verb]\" vs. \"the children in the green house who broke the big telescope [verb]\". The local contexts of \"children\" or \"[verb]\" do not suffice to solve this problem; instead, the large context of the entire sentence has to be considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For grad dot, replace e_t with e_t \u2212 \u0113_t. Since our baseline embeddings are all-zeros, this is equivalent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "3 www.yelp.com/dataset_challenge 4 http://nlp.stanford.edu/data/glove.840B.300d.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ArrasL/LRP_for_LSTM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A unified view of gradientbased attribution methods for deep neural networks",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Ancona",
"suffix": ""
},
{
"first": "Enea",
"middle": [],
"last": "Ceolini",
"suffix": ""
},
{
"first": "Cengiz\u00f6ztireli",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Gross",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Neural Information Processing System",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Ancona, Enea Ceolini, Cengiz\u00d6ztireli, and Markus Gross. 2017. A unified view of gradient- based attribution methods for deep neural networks. In Conference on Neural Information Processing System, Long Beach, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Towards better understanding of gradient-based attribution methods for deep neural networks",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Ancona",
"suffix": ""
},
{
"first": "Enea",
"middle": [],
"last": "Ceolini",
"suffix": ""
},
{
"first": "Cengiz\u00f6ztireli",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Gross",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Ancona, Enea Ceolini, Cengiz\u00d6ztireli, and Markus Gross. 2018. Towards better understanding of gradient-based attribution methods for deep neu- ral networks. In International Conference on Learn- ing Representations, Vancouver, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Explaining predictions of non-linear classifiers in NLP",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Arras",
"suffix": ""
},
{
"first": "Franziska",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2016,
"venue": "First Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Arras, Franziska Horn, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2016. Explaining predictions of non-linear classifiers in NLP. In First Workshop on Representation Learn- ing for NLP, pages 1-7, Berlin, Germany.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What is relevant in a text document?: An interpretable machine learning approach",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Arras",
"suffix": ""
},
{
"first": "Franziska",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2017,
"venue": "PloS one",
"volume": "12",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Arras, Franziska Horn, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2017a. What is relevant in a text document?: An inter- pretable machine learning approach. PloS one, 12(8):e0181142.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Explaining recurrent neural network predictions in sentiment analysis",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Arras",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2017,
"venue": "Eighth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "159--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Arras, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2017b. Explaining recurrent neural network predictions in sentiment analysis. In Eighth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 159-168, Copenhagen, Denmark.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Interpreting neural networks to improve politeness comprehension",
"authors": [
{
"first": "Malika",
"middle": [],
"last": "Aubakirova",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2035--2041",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malika Aubakirova and Mohit Bansal. 2016. Interpret- ing neural networks to improve politeness compre- hension. In Empirical Methods in Natural Language Processing, page 2035-2041, Austin, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Binder",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Klauschen",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2015,
"venue": "PloS one",
"volume": "10",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Bach, Alexander Binder, Gr\u00e9goire Mon- tavon, Frederick Klauschen, Klaus-Robert M\u00fcller, and Wojciech Samek. 2015. On pixel-wise explana- tions for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations, San Diego, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ask the GRU: Multi-task learning for deep text recommendations",
"authors": [
{
"first": "Trapit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Belanger",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Conference on Recommender Systems",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trapit Bansal, David Belanger, and Andrew McCal- lum. 2016. Ask the GRU: Multi-task learning for deep text recommendations. In ACM Conference on Recommender Systems, pages 107-114, Boston, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Quasi-recurrent neural networks",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-recurrent neural net- works. In International Conference on Learning Representations, Toulon, France.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103- 111, Doha, Qatar.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Extraction of salient sentences from labelled documents",
"authors": [
{
"first": "Misha",
"middle": [],
"last": "Denil",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Demiraj",
"suffix": ""
},
{
"first": "Nando",
"middle": [],
"last": "De Freitas",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Misha Denil, Alban Demiraj, and Nando de Freitas. 2015. Extraction of salient sentences from labelled documents. In International Conference on Learn- ing Representations, San Diego, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A roadmap for a rigorous science of interpretability",
"authors": [
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
},
{
"first": "Been",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finale Doshi-Velez and Been Kim. 2017. A roadmap for a rigorous science of interpretability. CoRR, abs/1702.08608.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "European union regulations on algorithmic decision-making and a \"right to explanation",
"authors": [
{
"first": "Bryce",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Flaxman",
"suffix": ""
}
],
"year": 2016,
"venue": "ICML Workshop on Human Interpretability in Machine Learning",
"volume": "",
"issue": "",
"pages": "26--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryce Goodman and Seth Flaxman. 2016. European union regulations on algorithmic decision-making and a \"right to explanation\". In ICML Workshop on Human Interpretability in Machine Learning, pages 26-30, New York, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Interpretation of prediction models using the input gradient",
"authors": [
{
"first": "Yotam",
"middle": [],
"last": "Hechtlinger",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yotam Hechtlinger. 2016. Interpretation of prediction models using the input gradient. In Conference on Neural Information Processing Systems, Barcelona, Spain.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Representation of linguistic form and function in recurrent neural networks",
"authors": [
{
"first": "Akos",
"middle": [],
"last": "K\u00e1d\u00e1r",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "4",
"pages": "761--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akos K\u00e1d\u00e1r, Grzegorz Chrupa\u0142a, and Afra Alishahi. 2017. Representation of linguistic form and func- tion in recurrent neural networks. Computational Linguistics, 43(4):761-780.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Investigating the influence of noise and distractors on the interpretation of neural networks",
"authors": [
{
"first": "Pieter-Jan",
"middle": [],
"last": "Kindermans",
"suffix": ""
},
{
"first": "Kristof",
"middle": [],
"last": "Sch\u00fctt",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Sven",
"middle": [],
"last": "D\u00e4hne",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pieter-Jan Kindermans, Kristof Sch\u00fctt, Klaus-Robert M\u00fcller, and Sven D\u00e4hne. 2016. Investigating the in- fluence of noise and distractors on the interpretation of neural networks. In Conference on Neural Infor- mation Processing Systems, Barcelona, Spain.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Newsweeder: Learning to filter netnews",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1995,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "331--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken Lang. 1995. Newsweeder: Learning to filter netnews. In International Conference on Machine Learning, pages 331-339, Tahoe City, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rationalizing neural predictions",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2016,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Empirical Methods in Natural Language Processing, pages 107-117, Austin, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Visualizing and understanding neural models in NLP",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "681--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural mod- els in NLP. In NAACL-HLT, pages 681-691, San Diego, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Understanding neural networks through representation erasure",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Un- derstanding neural networks through representation erasure. CoRR, abs/1612.08220.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Simplifying long short-term memory acoustic models for fast training and decoding",
"authors": [
{
"first": "Yajie",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Jinyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shi-Xiong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "2284--2288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yajie Miao, Jinyu Li, Yongqiang Wang, Shi-Xiong Zhang, and Yifan Gong. 2016. Simplifying long short-term memory acoustic models for fast train- ing and decoding. In International Conference on Acoustics, Speech and Signal Processing, pages 2284-2288.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A humangrounded evaluation benchmark for local explanations of machine learning",
"authors": [
{
"first": "Sina",
"middle": [],
"last": "Mohseni",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"D"
],
"last": "Ragan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sina Mohseni and Eric D Ragan. 2018. A human- grounded evaluation benchmark for local explana- tions of machine learning. CoRR, abs/1801.05075.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Automatic rule extraction from long short term memory networks",
"authors": [
{
"first": "W",
"middle": [
"James"
],
"last": "Murdoch",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W James Murdoch and Arthur Szlam. 2017. Auto- matic rule extraction from long short term memory networks. In International Conference on Learning Representations, Toulon, France.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543, Doha, Qatar.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Why should I trust you?: Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Ex- plaining the predictions of any classifier. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144, San Francisco, California.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Evaluating the visualization of what a deep neural network has learned",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Binder",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Lapuschkin",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE transactions on neural networks and learning systems",
"volume": "28",
"issue": "11",
"pages": "2660--2673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wojciech Samek, Alexander Binder, Gr\u00e9goire Mon- tavon, Sebastian Lapuschkin, and Klaus-Robert M\u00fcller. 2016. Evaluating the visualization of what a deep neural network has learned. IEEE trans- actions on neural networks and learning systems, 28(11):2660-2673.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Grad-cam: Visual explanations from deep networks via gradient-based localization",
"authors": [
{
"first": "Ramprasaath",
"middle": [
"R"
],
"last": "Selvaraju",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cogswell",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "618--626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramprasaath R Selvaraju, Michael Cogswell, Ab- hishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual expla- nations from deep networks via gradient-based lo- calization. In IEEE Conference on Computer Vision and Pattern Recognition, pages 618-626, Honolulu, Hawaii.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning important features through propagating activation differences",
"authors": [
{
"first": "Avanti",
"middle": [],
"last": "Shrikumar",
"suffix": ""
},
{
"first": "Peyton",
"middle": [],
"last": "Greenside",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Kundaje",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "3145--3153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avanti Shrikumar, Peyton Greenside, and Anshul Kun- daje. 2017. Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145- 3153, Sydney, Australia.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2014. Deep inside convolutional networks: Vi- sualising image classification models and saliency maps. In International Conference on Learning Representations, Banff, Canada.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning, Sydney, Australia.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Visualizing and understanding convolutional networks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Zeiler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2014,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "818--833",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Eu- ropean Conference on Computer Vision, pages 818- 833, Z\u00fcrich, Switzerland.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Top-down neural attention by excitation backprop",
"authors": [
{
"first": "Jianming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Brandt",
"suffix": ""
},
{
"first": "Xiaohui",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Sclaroff",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "543--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. 2016. Top-down neural at- tention by excitation backprop. In European Con- ference on Computer Vision, pages 543-559, Ams- terdam, Netherlands.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Visualizing deep neural network decisions: Prediction difference analysis",
"authors": [
{
"first": "M",
"middle": [],
"last": "Luisa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zintgraf",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Taco",
"suffix": ""
},
{
"first": "Tameem",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa M Zintgraf, Taco S Cohen, Tameem Adel, and Max Welling. 2017. Visualizing deep neural net- work decisions: Prediction difference analysis. In International Conference on Learning Representa- tions, Toulon, France.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Top: verb context classified singular. Green: evidence for singular. Task method: GRU. Bottom: verb context classified plural. Green: evidence for plural. Task method: LSTM. Underlined: subject. Bold: rmax position."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Top: verb context classified singular. Task method: LSTM. Bottom: hybrid yelp review, classified positive. Task method: QLSTM."
},
"TABREF1": {
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table",
"text": "Terminology with examples."
},
"TABREF2": {
"content": "<table><tr><td>grad dot s</td></tr><tr><td>).</td></tr><tr><td>2.2 Large context: Morphosyntactic</td></tr><tr><td>agreement paradigm</td></tr><tr><td>Many natural languages display morphosyntactic</td></tr><tr><td>agreement between words v and w. A DNN that</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "the link provided by the editor above [encourages ...] lrp the link provided by the editor above [encourages ...] limsse bb the link provided by the editor above [encourages ...] grad L2 s few if any events in history [are ...] occ1 few if any events in history [are ...] limsse ms s few if any events in history [are ...]"
},
"TABREF3": {
"content": "<table><tr><td>, C11-C15). It contains</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": ".61 .68 .67 .70 .68 .45 .47 .25 .33 .79 .26 .31 .07 .18 .74 .48 .23 .63 .19 .52 .27 .73 .22 .09 .11 .19 .19 grad L2 1p .57 .67 .67 .70 .74 .40 .43 .26 .34 .70 .18 .35 .07 .13 .66 .48 .22 .63 .18 .53 .26 .73 .21 .09 .09 .18 .11 grad L2 s .71 .66 .69 .71 .70 .58 .32 .26 .21 .82 .23 .15 .11 .08 .76 .69 .67 .68 .51 .73 .70 .75 .55 .19 .22 .20 .20 grad L2 p .71 .70 .72 .71 .77 .56 .34 .30 .23 .81 .13 .08 .14 .01 .78 .68 .77 .50 .70 .74 .82 .54 .78 .19 .21 .19 .30 grad dot1s.88 .85 .81 .77 .86 .79 .76 .59 .72 .89 .80 .70 .14 .47 .79 .81 .62 .73 .56 .85 .66 .81 .59 .42 .34 .46 .36 grad dot 1p .92 .88 .84 .79 .95 .78 .72 .59 .72 .81 .71 .59 .20 .44 .69 .79 .58 .74 .54 .83 .61 .81 .56 .41 .33 .46 .35 grad dot s .84 .90 .85 .87 .87 .81 .68 .60 .68 .89 .82 .64 .21 .26 .80 .90 .87 .78 .84 .94 .92 .83 .89 .54 .51 .46 .52 grad dot p .86 .89 .84 .89 .96 .80 .69 .62 .73 .89 .80 .53 .40 .54 .78 .87 .85 .68 .84 .93 .92 .74 .93 .53 .48 .42 .51 omit1 .79 .82 .85 .87 .61 .78 .75 .54 .76 .82 .80 .48 .33 .48 .65 .81 .81 .79 .80 .86 .87 .86 .84 .43 .45 .44 .45 omit3 .89 .80 .89 .88 .59 .79 .71 .72 .81 .76 .77 .37 .36 .49 .61 .74 .77 .73 .73 .82 .84 .82 .79 .41 .45 .42 .46 omit7 .92 .88 .91 .91 .70 .79 .77 .77 .84 .84 .77 .49 .44 .55 .65 .76 .80 .66 .74 .85 .88 .78 .80 .40 .48 .43 .47 occ1 .80 .71 .74 .84 .61 .78 .73 .60 .77 .82 .77 .49 .19 .10 .65 .91 .85 .86 .86 .94 .88 .89 .88 .50 .44 .46 .47 occ3 .92 .61 .93 .85 .59 .78 .63 .74 .74 .76 .74 .37 .32 .35 .61 .74 .73 .71 .72 .78 .76 .76 .76 .43 .37 .41 .43 occ7 .92 .77 .93 .90 .70 .78 .62 .74 .77 ."
},
"TABREF5": {
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table",
"text": ""
}
}
}
}