| { |
| "title": "Local Interpretations for Explainable Natural Language Processing: A Survey", |
| "abstract": "As the use of deep learning techniques has grown across various fields over the past decade, complaints about the opaqueness of the black-box models have increased, resulting in an increased focus on transparency in deep learning models. This work investigates various methods to improve the interpretability of deep neural networks for Natural Language Processing (NLP) tasks, including machine translation and sentiment analysis. We provide a comprehensive discussion on the definition of the term interpretability and its various aspects at the beginning of this work. The methods collected and summarised in this survey are only associated with local interpretation and are specifically divided into three categories: 1) interpreting the model\u2019s predictions through related input features; 2) interpreting through natural language explanation; 3) probing the hidden states of models and word representations.", |
| "sections": [ |
| { |
| "section_id": "1", |
| "parent_section_id": null, |
| "section_name": "1. Introduction", |
| "text": "As a result of the explosive development of deep learning techniques over the past decade, the performance of deep neural networks (DNN) has significantly improved across various tasks. DNN has been broadly applied in different fields, including business, healthcare, and justice. For example, in healthcare, artificial intelligence startups have raised $864 million in the second quarter of 2019, with higher amounts expected in the future as reported by the TDC Group111https://www.thedoctors.com/articles/the-algorithm-will-see-you-now-how-ais-healthcare-potential-outweighs-its-risk/ ###reference_gorithm-will-see-you-now-how-ais-healthcare-potential-outweighs-its-risk/###. However, while deep learning models have brought many foreseeable benefits to both patients and medical practitioners, such as enhanced image scanning and segmentation, the inability of these models to provide explanations for their predictions is still a severe risk, limiting their application and utility.\nBefore demonstrating the importance of the interpretability of deep learning models, it is essential to illustrate the opaqueness of DNNs compared to other interpretable machine learning models. Neural networks roughly mimic the hierarchical structures of neurons in the human brain to process information among hierarchical layers. Each neuron receives the information from its predecessors and passes the outputs to its successors, eventually resulting in a final prediction (Mueller and\nMassaron, 2019 ###reference_b123###). DNNs are neural networks with a large number of layers, meaning they contain up to billions of parameters. Compared to interpretable machine learning models such as linear regressions, where the few parameters in the model can be extracted as the explanation to illustrate influential features in prediction, or the decision trees, where a model\u2019s prediction process can be easily understood by following the decision rules, the complex and huge computations done by DNNs are hard to comprehend both for experts and non-experts alike. In addition, the representations used and constructed by DNNs are often complex and incredibly difficult to tie back to a set of observable variables in image and natural language processing tasks. As such, vanilla DNNs are often regarded as opaque \u2018black-box\u2019 models that have neither interpretable architectures nor clear features for interpretation of the model outputs.\nHowever, why should we want interpretable DNNs? One fundamental reason is that while the recent application of deep learning techniques to various tasks has resulted in high levels of performance and accuracy, these techniques still need improvement. As such, when applying these models to critical tasks where prediction results can cause significant real-world impacts, they are not guaranteed to provide faultless predictions. Furthermore, given any decision-making system, it is natural to demand explanations for the decisions provided. For example, the European Parliament adopted the General Data Protection Regulation (GDPR) in May 2018 to clarify the right of explanation for all individuals to obtain \u201cmeaningful explanations of the logic involved\u201d for automated decision-making procedures (Guidotti et al., 2018 ###reference_b61###). As such, it is legally and ethically crucial for the application of DNNs to develop and design ways for these networks to provide explanations for their predictions. 
In addition, explanations of predictions would help specialists verify their correctness, allowing them to judge whether a model is making the right predictions for the right reasons. As such, increasing interpretability is vital for expanding the applicability and correctness of DNNs.\nIn the past few years, several works have been proposed to improve the interpretability of DNNs. In this survey paper, we focus on local interpretation methods proposed for natural language processing tasks. As described in the following sections, we define local methods as those which provide explanations only for specific decisions made by the model - that is, methods that provide explanations for single instances rather than aiming to provide general descriptions of the model\u2019s decision-making process. We explore several recent local interpretation methods/techniques in Natural Language Processing (NLP), which aim to support users with no machine/deep learning expertise (note that some local interpretation methods, such as counterfactual and other example-based approaches, are not covered in this article, since only a few initial NLP studies have explored them):\nFeature importance methods, which work by determining and extracting the most important elements of an input instance.\nNatural language explanation, in which models generate text explanations for a given prediction.\nProbing, in which a model\u2019s internal states are examined when given certain inputs." |
| }, |
| { |
| "section_id": "1.1", |
| "parent_section_id": "1", |
| "section_name": "1.1. Definitions of Interpretability", |
| "text": "While there has been much study of the interpretability of DNNs, there are no unified definitions for the term interpretabilty, with different researchers defining it from different perspectives. We summarise the key aspects of interpretability used by these researchers below." |
| }, |
| { |
| "section_id": "1.1.1", |
| "parent_section_id": "1.1", |
| "section_name": "1.1.1. Explainability vs Interpretability", |
| "text": "The terms interpretability and explainability are often used synonymously across the field of explainable AI (Chakraborty\net al., 2017 ###reference_b28###; Adadi and Berrada, 2018 ###reference_b2###), with both terms being used to refer to the ability of a system to justify or explain the reasoning behind its decisions333For example, Stadelmaier and\nPad\u00f3 (2019 ###reference_b164###); Stahlberg\net al. (2018 ###reference_b165###); Liu\net al. (2019b ###reference_b104###); Wang et al. (2019b ###reference_b180###) primarily use explainability or explainable, while Serrano and Smith (2019a ###reference_b153###); Ribeiro\net al. (2016b ###reference_b147###); Camburu et al. (2018 ###reference_b24###); Tutek and\n\u0160najder (2018 ###reference_b172###) primarily use interpretable or interpretability.. Overall, the machine learning community tends to use the term interpretability, while the HCI community tends to use the term explainability (Adadi and Berrada, 2018 ###reference_b2###). Recent work has suggested more formal definitions of these terms (Chakraborty\net al., 2017 ###reference_b28###; Doshi-Velez and\nKim, 2018 ###reference_b47###; Guidotti et al., 2018 ###reference_b61###). Following Doshi-Velez and\nKim (2018 ###reference_b47###), we define interpretability as \u2018the ability [of a model] to explain or to present [its predictions] in understandable terms to a human.\u2019 We take explainability to be synonymous with interpretability unless otherwise stated, reflecting its general usage within the field." |
| }, |
| { |
| "section_id": "1.1.2", |
| "parent_section_id": "1.1", |
| "section_name": "1.1.2. Local and Global Interpretability", |
| "text": "An essential distinction in interpretable machine learning is between local and global interpretability. Following Guidotti et al. (2018 ###reference_b61###) and Doshi-Velez and\nKim (2018 ###reference_b47###), we take local interpretability to be \u2018the situation in which it is possible to understand only the reasons for a specific decision\u2019 (Guidotti et al., 2018 ###reference_b61###). That is, a locally interpretable model is a model that can give explanations for specific predictions and inputs. We take global interpretability to be the situation in which it is possible to understand \u2018the whole logic of a model and follow the entire reasoning leading to all the different possible outcomes\u2019 (Guidotti et al., 2018 ###reference_b61###). A classic example of a globally interpretable model is a decision tree, in which the general behaviour of the model may be easily understood by examining the decision nodes that make up the tree. As understanding the whole logic of a model often requires the use of specific models or significant changes to an existing model, in this paper, we focus on local interpretation methods, as these tend to be more generally applicable to existing and future NLP models." |
| }, |
| { |
| "section_id": "1.1.3", |
| "parent_section_id": "1.1", |
| "section_name": "1.1.3. Post-hoc vs In-built Interpretations", |
| "text": "Another important distinction is whether an interpretability method is applied to a model after the fact or integrated into the internals of a model. The former is referred as a post-hoc interpretation method (Molnar, 2019 ###reference_b121###), while the latter is an in-built interpretation method. As Post-hoc methods are applied to model the fact, they generally do not impact the model\u2019s performance. Some post-hoc methods do not require any access to the internals of the model being explained and so are model-agnostic. An example of a typical post-hoc interpretable method is LIME (Ribeiro\net al., 2016a ###reference_b146###), which generates the local interpretation for one instance by permuting the original inputs of an underlying black-box model. In contrast to post-hoc interpretations, in-built interpretations are closely integrated into the model itself. The interpretation may come from the transparency of the model, where the workings of the model itself are clear and easy to understand (for example, a decision tree), or may come from an interpretation generated by the model in an opaque manner (for example, a model that generates a text explanation during its prediction process). In this survey, we will examine both methods." |
| }, |
| { |
| "section_id": "1.2", |
| "parent_section_id": "1", |
| "section_name": "1.2. Paper layout", |
| "text": "Before examining interpretability methods, we first discuss different aspects of interpretability in Section 2. In Section 3 we summarize and categorize three main interpretable methods in NLP, including 1) improving a model\u2019s interpretability by identifying the important input features; 2) explaining a model\u2019s predictions by generating direct natural language explanations; 3) probing the internal state and mechanisms of a model. We also provide a quick summary of datasets that are commonly used for the study of each method. In Section 4, we summarise several primary methods to evaluate the interpretability of each method discussed in Section 3. We finally discussed the limitations of current interpretable methods in NLP in Section 5 and the possible future trend of interpretability development at the end." |
| }, |
| { |
| "section_id": "2", |
| "parent_section_id": null, |
| "section_name": "2. Aspects of Interpretability", |
| "text": "" |
| }, |
| { |
| "section_id": "2.1", |
| "parent_section_id": "2", |
| "section_name": "2.1. Interpretability requirements", |
| "text": "Before discussing the various aspects of interpretability, it is also essential to consider what problems require interpretable solutions and what interpretable models best fit these problems. Following (Doshi-Velez and\nKim, 2018 ###reference_b47###), we suggest that anyone looking to build interpretable models first determine the following four points:\nDo you need an explanation for a specific instance or understand how a model works? In the former case, local interpretation methods will likely prove more suitable, while in the latter global interpretation methods will be required.\nHow much time does/will a user have to understand the explanation? This, along with the point below, is an important concern for the usability of an interpretation method. Certain methods lend themselves to quick, intuitive understanding, while others require some more effort and time to comprehend.\nWhat background and expertise will the users of your interpretable model have? As mentioned, this is an important usability concern. For example, regression weights have classically been considered \u2018interpretable\u2019, but require a user to have some understanding of regression beforehand. In contrast, decision trees (when rendered in a tree structure) are often understandable even to non-experts.\nWhat aspects or parts of the problem do you want to explain? It is important to consider what can and cannot be explained by your model, and prioritise accordingly. For example, explaining all potential judgements a self-driving car could make in any situation is infeasible, but restricting explanations to certain systems or situations allows easier measuring and assurance of interpretation quality.\nThese points allow categorisation of interpretability-related problems, and thus clearer understanding of what is required from an interpretable system and suitable interpretation methods for the problem itself." |
| }, |
| { |
| "section_id": "2.2", |
| "parent_section_id": "2", |
| "section_name": "2.2. Dimensions of Interpretability", |
| "text": "\u2018Interpretability\u2019 is not a simple binary or monolithic concept, but rather one that can be measured along multiple dimensions. Different aspects of interpretability have been identified across the literature, which we condense and summarise into four key aspects: faithfulness, stability, comprehensibility, and trustworthiness." |
| }, |
| { |
| "section_id": "2.2.1", |
| "parent_section_id": "2.2", |
| "section_name": "2.2.1. Faithfulness", |
| "text": "Faithfulness measures how well an interpretation method reflects the decision-making process used by the underlying model. For example, an image heatmap that highlights parts of the image not genuinely used by the model would be unfaithful, while highlighting the parts genuinely used by the model would be more faithful. Traditionally, this has been more a concern for post-hoc methods such as LIME (Ribeiro\net al., 2016a ###reference_b146###) and SHAP (Lundberg and Lee, 2017 ###reference_b109###). However, more recent work has called into question the faithfulness of in-built interpretability methods such as attention weight examination (Wiegreffe and\nPinter, 2019a ###reference_b185###; Jacovi and\nGoldberg, 2020a ###reference_b78###; Jain and Wallace, 2019a ###reference_b81###). Faithfulness is essential for claims that an interpretation method accurately reflects a model\u2019s process to reach a judgement. Explanations provided by an unfaithful method may hide existing biases that the underlying model uses for judgements, potentially engendering unwarranted trust or belief in these predictions (Jacovi and\nGoldberg, 2020a ###reference_b78###). Related is the notion of fidelity as defined in Molnar (2019 ###reference_b121###): the extent of how well an interpretable method can approximate the performance of a black-box model. Underlying this definition is the assumption that a method that better approximates a black box also must use a similar reasoning process to that underlying model444This is stated as \u2018the model assumption\u2019 in Jacovi and\nGoldberg (2020a ###reference_b78###).. As such, this definition of fidelity is a more specific form of faithfulness as applied to interpretability methods that construct models approximating an underlying black-box model, such as LIME (Ribeiro\net al., 2016b ###reference_b147###)." |
| }, |
| { |
| "section_id": "2.2.2", |
| "parent_section_id": "2.2", |
| "section_name": "2.2.2. Stability", |
| "text": "An interpretation method is stable if it provides similar explanations for similar inputs (Molnar, 2019 ###reference_b121###) unless the difference between the inputs is highly important for the task at hand. For example, an explanation produced by natural language generation (NLG) would be stable if minor differences in the input resulted in similar text explanations and would be unstable if the slight differences resulted in wildly different explanations. Stability is a generally desirable trait important for research (Yu, 2013 ###reference_b194###) and is required for a model to be trustworthy (Murdoch et al., 2019 ###reference_b124###). In addition, the stability of human explanations for a particular task should be considered, i.e. if explanations written by humans differ significantly from each other, it is unreasonable to expect a model trained on such explanations to do any better. This is especially important for highly free-form interpretation methods such as natural language explanations." |
| }, |
| { |
| "section_id": "2.2.3", |
| "parent_section_id": "2.2", |
| "section_name": "2.2.3. Comprehensibility", |
| "text": "An interpretation is considered comprehensible if it\u2019s understandable to an end-user. In order for an explanation to be useful at all, it must be understandable to some degree. However, this is subjective: there is no global common standard for \u2018understandability\u2019. In addition, the background of the end-user matters: a medical professional will be able to understand an explanation with scientific medical terms far better than a layperson. Nevertheless, there are still several general ways to rate the interpretability of an explanation: examining its size (how much a user must process when \u2018reading\u2019 the explanation), examining how well a human can predict a model\u2019s prediction given just the explanation, and examining the understandability of individual features of the explanation (Molnar, 2019 ###reference_b121###). For example, a sparse linear model with only a few non-zero weights has far fewer components for a user to consider, and so would be more comprehensible than a linear model with hundreds of weights. Furthermore, comprehensibility is related to the concept of transparency (Lipton, 2018 ###reference_b103###), which refers to how well a person can understand the mechanism by which a model works. Transparency can be achieved in several ways: through being able to simulate the model in your mind (for example, a linear regression with few weights), or having deep knowledge of the underlying algorithm used by the model (for example, proving some property of any solution an algorithm will produce). Models with greater degrees of transparency are thus also more comprehensible than non-transparent models." |
| }, |
| { |
| "section_id": "2.2.4", |
| "parent_section_id": "2.2", |
| "section_name": "2.2.4. Trustworthiness", |
| "text": "An interpretation is considered comprehensible if it is understandable to an end-user. In order for an explanation to be helpful at all, it must be understandable to some degree. However, this is subjective. In other words, there is no global shared standard for \u2018understandability\u2019. In addition, the background of the end-user matters: a medical professional will be able to understand an explanation with scientific medical terms far better than a layperson. Nevertheless, there are still several general ways to rate the interpretability of an explanation: examining its size (how much a user must process when \u2018reading\u2019 the explanation), examining how well a human can predict a model\u2019s prediction given just the explanation, and examining the understandability of individual features of the explanation (Molnar, 2019 ###reference_b121###). For example, a sparse linear model with only a few non-zero weights has far fewer components for a user to consider and would be more comprehensible than a linear model with hundreds of weights. Furthermore, comprehensibility is related to the concept of transparency (Lipton, 2018 ###reference_b103###), which refers to how well a person can understand the mechanism by which a model works. There are several ways to achieve transparency: through being able to simulate the model in your mind (for example, a linear regression with few weights) or having deep knowledge of the underlying algorithm used by the model (for example, proving some property of any solution an algorithm will produce). Models with greater transparency degrees are thus more comprehensible than non-transparent models.\n###figure_1###" |
| }, |
| { |
| "section_id": "3", |
| "parent_section_id": null, |
| "section_name": "3. Interpretable Methods", |
| "text": "" |
| }, |
| { |
| "section_id": "3.1", |
| "parent_section_id": "3", |
| "section_name": "3.1. Feature Importance", |
| "text": "Identifying the important input features that significantly impact a model\u2019s prediction results is a straightforward method of improving a model\u2019s local interpretability, directly linking model outputs to inputs. Important features can be, for example, words for text-based tasks or image regions for image-based tasks. This paper focuses on the four main different methods of extracting important features as the interpretation for the model\u2019s outputs: rationale extraction, input perturbation, attribution methods and attention weight extraction. We conclude the typology of feature importance methods in Figure 2 ###reference_### and present the sample visualisations of extracted features from inputs in Figure 1 ###reference_###." |
| }, |
| { |
| "section_id": "3.1.1", |
| "parent_section_id": "3.1", |
| "section_name": "3.1.1. Rationale Extraction", |
| "text": "Rationale extractions are usually used as the local interpretable method for NLP tasks of sentiment analysis and document classification. Rationales are short and coherent phrases from the original textual inputs and represent the critical textual features that contribute most to the output prediction. These identified textual features work as the local explanation that interprets the information the model primarily pays attention to when making the prediction decision for a particular textual input. The good rationales valid for the explanation should lead to the same prediction results as the original textual inputs. As this work area developed, researchers also made extra efforts to extract coherent and consecutive rationales to use them as more readable and comprehensive explanations.\nThe rationale extraction methods can be mainly divided into two streams: 1). a sequential selector-predictor stacked model, where the selector first selects the rationales from the original textual inputs and then pass to the predictor for the prediction result; 2). the adversarial-based model that involves the parallel models to calibrate the rationales extracted by the selector. In this paper, we summarise several iconic and milestone works of rationale extractions for each stream.\nFor the selector-predictor stream, Lei\net al. (2016 ###reference_b95###) is one of the first works for the rationale extraction in NLP tasks. The selector process first generates a binary vector of 0 and 1 through a Bernoulli distribution conditioned on the original textual inputs. This binary vector will then be multiplied over the original inputs where 1 indicates the selection of input words as rationales and 0 indicates the non-selection, resulting in a sparse input representation that indicates which textual tokens are selected as rationales and which tokens are not. The predictor will then process based on such information. Since the selected rationales are represented with non-differentiable discrete values, the REINFORCE algorithm (Williams, 1992 ###reference_b187###) was applied for optimization to update the binary vectors for the eventually accurate rational selection. Lei\net al. (2016 ###reference_b95###) performed rationale extraction for a sentiment analysis task with the training data that has no pre-annotated rationales to guide the learning process. The training loss is calculated through the difference between a ground truth sentiment vector and a predicted sentiment vector generated from extracted rationales selected by the selector model. Such selector-predictor structure is designed to mainly boost the interpretability faithfulness, i.e. selecting valid rationales that can predict the accurate output as the original textual inputs. To increase the readiness of the explanation, Lei\net al. (2016 ###reference_b95###) used two different regularizers over the loss function to force rationales to be consecutive words (readable phrases) and limit the number of selected rationales (i.e. selected words/phrases). Bastings\net al. (2019 ###reference_b18###) followed the same selector-predictor structure as Lei\net al. (2016 ###reference_b95###). The main difference is that they used rectified Kumaraswamy distribution (Kumaraswamy, 1980 ###reference_b93###) instead of Bernoulli distribution to generate the rationale selection vector, i.e. the binary vector of 0 and 1 to be masked over textual inputs. 
The Kumaraswamy distribution allows gradient estimation for optimization, so the REINFORCE algorithm is no longer needed. To encourage short and coherent rationales for better readability and comprehensibility, Bastings et al. (2019) also applied a relaxed form of L0 regularization (Louizos et al., 2018) and Lagrangian relaxation to encourage adjacent words to be selected or deselected together. In contrast to the above methods, where rationale extraction is wrapped in an end-to-end model and no annotated rationales are used during the training of rationale selection, Du et al. (2019a) uses rationales annotated by external experts as guidance when training the rationale selector, generating local explanations (short and coherent rationales) that are consistent with these external human-annotated rationales.\nFor the stream of adversarial-based models, a third module is usually added in addition to the selector-predictor stack, functioning as a guide to boost the faithfulness of rationales and improve the comprehensibility of the interpretation. For example, to boost the faithfulness of extracted rationales, Yu et al. (2019a) inserted the target labels of sentiment analysis as additional inputs into the rationale selector to boost its participation in prediction. Additionally, to improve comprehensibility by preventing the rationale selector from selecting meaningless small snippets, this work added a third element: a complement predictor. This additional module predicts the labels for the original textual inputs based on the non-rationale words. The complement predictor and the generator work much like the discriminative and generative networks in generative adversarial networks (GANs) (Goodfellow et al., 2014): the rationale selector aims to extract as many prediction-relevant words as possible as rationales, so that the complement predictor cannot predict the actual label from what remains. Similar to Yu et al. (2019a), Chang et al. (2019) also involved a third module where the target labels of the original inputs are used as additional inputs, with the addition that these target labels can be incorrect. This work also proposed a counterfactual rationale generator to extract rationales that support incorrect predictions. A discriminator is then applied to distinguish between the outputs of the factual and counterfactual rationale generators. More recent work such as Sha et al. (2021) reduces the complexity of the three-module setup by constructing a guider model that makes predictions from the original textual inputs alongside the rationale selector model in the adversarial-based architecture, encouraging the final prediction vectors from the two separate models to be close to each other and thus improving the faithfulness of the extracted rationales. To achieve better comprehensibility, Sha et al. (2021) also proposed using a language model as a regularizer, which significantly improves the fluency of the extracted rationales by encouraging the selection of consecutive tokens that describe the rationale well.\nIn general, using rationales extracted from the original textual inputs as a model\u2019s local interpretation focuses on the faithfulness and comprehensibility of the interpretation.
Besides selecting rationales that represent the complete inputs well enough to yield the same prediction results, extracting short and consecutive sub-phrases is also a key objective of current rationale extraction work. Such fluent, consecutive sub-phrases (i.e. well-extracted rationales) make rationale extraction a friendly interpretation method that provides readable and understandable explanations to non-expert users without NLP-related knowledge.\n[Figure 2. Typology of feature importance methods: Rationale Extraction, with sequential selector-predictor approaches (Lei et al., 2016; Bastings et al., 2019; Du et al., 2019a) and adversarial-based approaches (Yu et al., 2019a; Chang et al., 2019; Sha et al., 2021); Input Perturbation (Ribeiro et al., 2016a; Basaj et al., 2018; Ribeiro et al., 2018; Alvarez-Melis and Jaakkola, 2017; Feng et al., 2018; Slack et al., 2020), including generating counterfactual explanations based on input perturbation (Wu et al., 2021; Ribeiro et al., 2020; Chen et al., 2021); Attention Weights, covering works with attention explanations for sentiment analysis (Luo et al., 2018; Mao et al., 2019; Wang et al., 2018), question answering (Tu et al., 2020; Shen et al., 2018; Sydorova et al., 2019), neural machine translation (Bahdanau et al., 2014; Luong et al., 2015), visual question answering (Lu et al., 2016; Yang et al., 2016; Yu et al., 2019b; Luo et al., 2020) and image captioning (Xu et al., 2015; Anderson et al., 2018), debates over attention as a valid explanation, in support (Wiegreffe and Pinter, 2019b; Jacovi and Goldberg, 2020b) and against (Bai et al., 2021; Clark et al., 2019b; Jain and Wallace, 2019b; Serrano and Smith, 2019b), and works to improve attention faithfulness (Bai et al., 2021; Chrysostomou and Aletras, 2021); and Attribution Methods (Sundararajan et al., 2017; He et al., 2019; Mudrakarta et al., 2018; Du et al., 2019b; Ding et al., 2017; Bach et al., 2015).]
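\nTo make the selector-predictor pattern described above more concrete, the following is a minimal sketch of ours (not code from the cited works) of a single training step in the style of Lei et al. (2016): a selector samples a binary rationale mask from a Bernoulli distribution over the tokens, a predictor classifies the masked input, and the non-differentiable sampling step is handled with a REINFORCE-style term plus a sparsity penalty; all module sizes and coefficients are arbitrary assumptions:
import torch
import torch.nn as nn
torch.manual_seed(0)
V, D, T, B = 100, 32, 20, 8                            # vocab size, embedding dim, sequence length, batch
emb = nn.Embedding(V, D)
selector = nn.Linear(D, 1)                             # per-token probability of being a rationale
predictor = nn.GRU(D, D, batch_first=True)
classifier = nn.Linear(D, 2)
tokens = torch.randint(0, V, (B, T))
labels = torch.randint(0, 2, (B,))
x = emb(tokens)                                        # (B, T, D)
p = torch.sigmoid(selector(x)).squeeze(-1)             # selection probability for each token
mask = torch.bernoulli(p)                              # sampled binary rationale mask
_, h = predictor(x * mask.unsqueeze(-1))               # predict from the masked input only
logits = classifier(h.squeeze(0))
ce = nn.functional.cross_entropy(logits, labels, reduction='none')
log_prob = (mask * torch.log(p + 1e-8) + (1 - mask) * torch.log(1 - p + 1e-8)).sum(-1)
sparsity = mask.sum(-1)                                # penalise long rationales
# REINFORCE-style term: reward masks that yield low prediction loss while selecting few tokens
loss = (ce + (ce.detach() + 0.01 * sparsity) * log_prob).mean()
loss.backward()                                        # an optimiser step would follow here
In a full implementation this step is repeated over a real dataset with an optimiser, and an additional regularizer encourages the selected tokens to be contiguous, as discussed above." |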
| }, |
| { |
| "section_id": "3.1.2", |
| "parent_section_id": "3.1", |
| "section_name": "3.1.2. Input Perturbation", |
| "text": "Another method for identifying important features of textual inputs is input perturbation. For this method, a word (or a few words) of the original input is modified or removed (i.e. \u2018perturbed\u2019), and the resulting performance change is measured. The more significant the model\u2019s performance drop, the more critical these words are to the model and therefore are regarded as important features. Input perturbation is usually model-agnostic, which does not influence the original model\u2019s architecture. The main difference among the proposed input perturbation methods lies in how to perturb the tokens or phrases from original inputs into the new instances.\nRibeiro\net al. (2016a ###reference_b146###) proposed a local interpretable model-agnostic explanations (LIME) model that can be used as an interpretable method for any black-box model. The main idea of LIME is the approximation of a black-box model with a transparent model using variants of original inputs. For natural language processing tasks such as text classification, words of original textual inputs are randomly selected and removed from the inputs, using a binary representation to mark the inclusion of words. Basaj et al. (2018 ###reference_b17###) applied LIME to a QA task for identifying the important words in a question, where the words in the questions are considered to be features, while the associated context (i.e. text containing the answer to the given question) was held constant. The results indicate that in QA tasks, the complete sentence of question plays a minor role, and just a small amount of question words are sufficient for correct answer prediction.\nRibeiro\net al. (2018 ###reference_b148###) argued that the important features identified by Ribeiro\net al. (2016a ###reference_b146###) are based on word-level (single token) instead of phrase-level (consecutive tokens) features. Word-level features relate to only one instance and cannot provide general explanations, which makes it difficult to extend such explanations to unseen instances. For example, in sentiment analysis, \u2018not\u2019 in \u2018The movie is not good\u2019 is a contributing feature for negative sentiment but is not a contributing feature for positive sentiment in \u2018The weather is not bad\u2019. The single token \u2018not\u2019 is insufficient as a general explanation for unseen instances as it will lead to different meanings when combined with different words. Thus, Ribeiro\net al. (2018 ###reference_b148###) emphasized the phrase-level features for more comprehensive local interpretations and proposed a rule-based method for identifying critical features for predictions. Their proposed algorithm iteratively selects predicates from inputs as key tokens while replacing the rest of the tokens with random tokens that have the same POS tags and similar word embeddings. If the probability of classifying the perturbed text into the same class as that of the original text is above a predefined threshold, the selected predicates will be considered as the ultimate key features to interpret the prediction results.\nSimilar to Ribeiro\net al. (2018 ###reference_b148###, 2016a ###reference_b146###), Alvarez-Melis and\nJaakkola (2017 ###reference_b6###) also proposed a model-agnostic interpretable method to relate inputs to outputs through the use of perturbed inputs generated by a variational auto-encoder applied to the original input. The perturbed input is supposed to have a similar meaning to the original input. 
A bipartite graph is then constructed to link these perturbed inputs and the outputs, and the graph is partitioned to highlight which inputs are relevant to which specific output tokens.\nFeng et al. (2018) proposed a method that gradually removes unimportant words from the original texts while maintaining the model\u2019s performance. The remaining words are then considered the important features for prediction. The importance of each token of the textual input is measured through a gradient approximation method, which takes the dot product between a given token\u2019s word embedding and the gradient of the output with respect to that embedding (Ebrahimi et al., 2018). The authors show that while the reduced inputs are nonsensical to humans, they are still enough for a given model to maintain a similar level of accuracy compared with the original inputs.\nInput perturbation is a straightforward way of identifying significant input features by measuring the change in the target task\u2019s performance on the newly perturbed instances. However, there are also works questioning the faithfulness of input perturbation. For example, Slack et al. (2020) conducted several experiments and argued that when the distribution of perturbed instances differs from that of the original instances, the explanations of LIME (Ribeiro et al., 2016a) are not faithful. Another problem with most input perturbation explanations is that the identified important features are mostly independent tokens rather than coherent phrases, as argued by Ribeiro et al. (2018), which limits comprehensibility. A recent line of local explanation work, counterfactual explanations (Wu et al., 2021; Ribeiro et al., 2020; Chen et al., 2021), also builds on input perturbation: it shows what would happen if certain features were replaced, thereby demonstrating that those features are important for a particular model decision. These counterfactual explanations extend input perturbation beyond the simple word level and present the interpretation in the form of more straightforward counterfactual examples, which gives non-expert users a more intuitive understanding.
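\nAs a minimal illustration of the general perturb-and-measure idea (a generic leave-one-out sketch of ours, not an implementation of LIME or of the other cited methods), one can delete each token in turn and record how much the model\u2019s probability for the predicted class drops; model_prob is an assumed stand-in for any black-box text classifier:
def word_importance(model_prob, tokens, target_class):
    # model_prob: callable mapping a token list to a class-probability mapping
    base = model_prob(tokens)[target_class]
    scores = {}
    for i, tok in enumerate(tokens):
        perturbed = tokens[:i] + tokens[i + 1:]        # remove one token ('perturb' the input)
        scores[tok] = base - model_prob(perturbed)[target_class]   # larger drop = more important
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
# toy usage with a fake classifier that only cares about the word 'good'
fake_model = lambda toks: {1: 0.9, 0: 0.1} if 'good' in toks else {1: 0.2, 0: 0.8}
print(word_importance(fake_model, ['the', 'movie', 'is', 'good'], target_class=1))
Real perturbation methods differ mainly in how they perturb the input (random removal in LIME, POS-preserving substitution in Ribeiro et al. (2018), VAE-generated paraphrases in Alvarez-Melis and Jaakkola (2017)) and in how the resulting prediction changes are aggregated." |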
| }, |
| { |
| "section_id": "3.1.3", |
| "parent_section_id": "3.1", |
| "section_name": "3.1.3. Attention weights", |
| "text": "Attention weight is a weighted sum score of input representation in intermediate layers of neural networks (Bahdanau\net al., 2014 ###reference_b15###). Extracting attention weights for inputs to provide local interpretations for predictions is commonly used among models that utilise attention mechanisms. For NLP tasks with only textual inputs, tokens with higher attention weights are considered to have more impact on the outputs during the neural network training and are, therefore, regarded as the more important features. Attention weights have been used for explainability in sentiment analysis (Luo\net al., 2018 ###reference_b110###; Mao\net al., 2019 ###reference_b115###; Wang\net al., 2018 ###reference_b178###), question answering (Tu\net al., 2020 ###reference_b171###; Shen\net al., 2018 ###reference_b156###; Sydorova\net al., 2019 ###reference_b169###), and neural machine translation (Bahdanau\net al., 2014 ###reference_b15###; Luong\net al., 2015 ###reference_b112###). In tasks with both visual and textual inputs, such as Visual Question Answering (VQA) (Lu\net al., 2016 ###reference_b108###; Yang\net al., 2016 ###reference_b191###; Yu\net al., 2019b ###reference_b196###; Ding\net al., 2023 ###reference_b46###; Cao\net al., 2023 ###reference_b26###) and image captioning (Xu et al., 2015 ###reference_b190###; Anderson et al., 2018 ###reference_b8###; Han\net al., 2020 ###reference_b64###), attention weights are extracted from both images and questions to identify the contributing features from both modalities. In the case of such multi-modal tasks, it is also important to boost the consistency between the attended image regions and sentence tokens for a plausible explanation. In recent years, different attention mechanisms have been proposed, including the self-attention mechanism (Vaswani et al., 2017 ###reference_b174###) and the co-attention mechanism for multi-modal inputs (Yu\net al., 2019b ###reference_b196###), aiming for better attention weights calculation that genuinely reflects the contributing factors to the final prediction.\nThough attention mechanisms have proved their effectiveness in performance increment in different tasks and have been used as the indicators of important features to explain the model\u2019s prediction results, there have always been debates arguing about the faithfulness of attention weights as the interpretation for neural networks.\nBai\net al. (2021 ###reference_b16###) proposed the concept of combinatorial shortcuts caused by the attention mechanism. It argued that the masks used to map the query and key matrices of the self-attention (Vaswani et al., 2017 ###reference_b174###) are biased, which would lead to the same positional tokens being attended regardless of the actual word semantics of different inputs. Clark\net al. (2019b ###reference_b36###) detected that the large amounts of attention of BERT (Devlin\net al., 2019a ###reference_b42###) focus on the meaningless tokens such as the special token [SEP]. Jain and Wallace (2019b ###reference_b82###) argued that the tokens with high attention weights are not consistent with the important tokens identified by the other interpretable methods, such as the gradient-based measures. 
Serrano and Smith (2019b) applied intermediate representation erasure and claimed that attention weights at best indicate the importance of intermediate components and are not faithful enough to explain a model\u2019s decision at the level of the actual inputs.\nIn contrast, Wiegreffe and Pinter (2019b), in \u2018Attention is not not explanation\u2019, responded directly to the arguments of Jain and Wallace (2019b), arguing that whether attention weights are faithful explanations depends on the definition of explanation, and conducted four different experiments to show when attention can be used as an explanation. A similar view is proposed by Jacovi and Goldberg (2020b), who illustrate that in some cases attention maps over the input can be considered a faithful explanation, which can be verified by the erasure method (Arras et al., 2017; Feng et al., 2018), i.e. checking whether erasing the attended tokens from the inputs changes the prediction results.\nTo improve the faithfulness of attention as an explanation, some recent works have proposed different methods. For example, Bai et al. (2021) proposed generating an unbiased mask distribution by using random mask distributions and training only the attention layers while fixing the other downstream parts of the model, which scales the attention weights towards tokens that are truly correlated with the predicted label. Chrysostomou and Aletras (2021) introduced three task-scaling mechanisms that scale the word representations in different ways before they are passed to the attention mechanism, and claimed that such scaled word representations help to produce more faithful attention-based explanations.\nOverall, the dilemma of using inputs with high attention weights as the explanation for a black-box model\u2019s decision stems from the varying definitions and inconsistent evaluations of explanation faithfulness across different works. Jacovi and Goldberg (2020b) propose that a possible approach to resolving this issue is to construct a unified evaluation of the degree of faithfulness, either at the level of a specific task or at the level of sub-spaces of the input space. Nevertheless, regardless of the debate over faithfulness, explanation by attention weights suffers from low readability. Compared to rationale extraction works that explicitly force consecutive rationales to be extracted for better comprehensibility, current works using attention as explanation neglect this aspect of interpretability. Therefore, even in cases where the input tokens with high attention weights could work as faithful explanations, it is hard for non-experts to understand an explanation consisting of non-contiguous highlighted tokens of the textual input. However, for multimodal tasks such as visual question answering, some works use attention maps over the images as the explanation (Luo et al., 2020) or as part of the explanation (Wu and Mooney, 2019); the attended regions are usually contiguous image pixels, which can be more straightforward for non-expert users to understand than attention maps over pure text.
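\nAs a small illustration of how attention weights are typically read off as token importance scores (a generic sketch of ours, not tied to any specific cited model), the following computes scaled dot-product attention from a single query vector over token representations and ranks the tokens by their weights; the encoder outputs and the query here are random stand-ins:
import torch
torch.manual_seed(0)
tokens = ['the', 'movie', 'was', 'surprisingly', 'good']
H = torch.randn(len(tokens), 16)            # token representations from some encoder (assumed)
q = torch.randn(16)                         # a query vector, e.g. from a classification head
scores = H @ q / (16 ** 0.5)                # scaled dot-product attention scores
weights = torch.softmax(scores, dim=0)      # attention weights over the input tokens
# rank tokens by attention weight and treat the highest-weighted ones as the 'explanation'
ranked = sorted(zip(tokens, weights.tolist()), key=lambda kv: kv[1], reverse=True)
for tok, w in ranked:
    print(f'{tok:>12}: {w:.3f}')
The debate summarised above concerns precisely whether such weights, read off an actual trained model, faithfully reflect which tokens drove the prediction." |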
| }, |
| { |
| "section_id": "3.1.4", |
| "parent_section_id": "3.1", |
| "section_name": "3.1.4. Attribution Methods", |
| "text": "Another method of detecting important input features that contribute most to a specific prediction is attribution methods, which aim to interpret prediction outputs by examining the gradients of a model. Common attribution methods include DeepLift (Shrikumar\net al., 2017 ###reference_b158###), Layer-wise relevance propagation (LRP) (Bach et al., 2015 ###reference_b14###), deconvolutional networks (Zeiler\net al., 2010 ###reference_b197###) and guided back-propagation (Springenberg et al., 2015 ###reference_b162###).\nExtracting model gradients allows for identifying high-contributing input features to a given prediction. However, directly extracting gradients does not work well with regards to two key properties: sensitivity and implementation invariance. Sensitivity emphasizes that if we have two inputs with one differing feature that lead to different predictions, this differing feature should be noted as important to the prediction. Implementation invariance means that the outputs of two models should be equivalent if they are functionally equivalent, whether their implementations are the same or not. Focusing on these properties, Sundararajan\net al. (2017 ###reference_b168###) proposed an integrated gradient method. Integrated gradients are the accumulative gradients of all points on a straight line between an input and a baseline point (e.g. a zero-word embedding). He\net al. (2019 ###reference_b68###) applied this method to natural machine translation to find the contribution of each input word to each output word. Here, the baseline input is a sequence of zero embeddings in the same length as the input to be translated. Mudrakarta et al. (2018 ###reference_b122###) applied integrated gradients to a question-answering task to identify the critical words in questions and found that only a few words in a question contribute to the model answer prediction.\nBesides extracting the gradients, scoring input contributions based on the model\u2019s hidden states is also used for attribution. For example, Du\net al. (2019b ###reference_b49###) proposed a post-hoc interpretable method that leaves the original training model untouched by examining the hidden states passed along by RNNs. Ding\net al. (2017 ###reference_b45###) applied LRP (Bach et al., 2015 ###reference_b14###) to neural machine translation to provide interpretations using the hidden state values of each source and target word.\nThe attribution methods are the preliminary approaches for deep learning researchers to explain the neural networks through the identified input features with outstanding gradients. The idea of the attribution methods were mostly proposed before the mature development and vast researches of rationale extraction, attention mechanisms and even the input perturbation methods. Compared to the other input feature explanation methods, the attribution methods hardly consider the interpretation\u2019s faithfulness and comprehensibility as the other three input feature explanation methods. Visualizing the identified features from inputs would be at the same plausible level as that of the other three feature importance methods to non-expert users, but the attribution methods do not work to form the interpretation into coherent sub-phrases for better readability and easier understanding. 
Thus, compared to rationale extraction, attention weight extraction and input perturbation, attribution methods are better seen as diagnostic tools that help deep learning experts understand a model\u2019s decisions and learn about its functionality.
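\nThe integrated-gradients computation described above can be written compactly; the sketch below is ours (with an assumed differentiable model operating on embeddings, not code from the cited papers) and approximates the path integral between a zero baseline and the input embeddings with a Riemann sum:
import torch
def integrated_gradients(model, emb, baseline=None, steps=50):
    # emb: (seq_len, dim) input embeddings; model: maps embeddings to a scalar output score
    baseline = torch.zeros_like(emb) if baseline is None else baseline
    total = torch.zeros_like(emb)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (emb - baseline)   # a point on the straight-line path
        point.requires_grad_(True)
        model(point).backward()
        total += point.grad
    # attribution per embedding dimension; summing over dimensions gives a per-token score
    return (emb - baseline) * total / steps
# toy usage with an assumed scalar-valued model
emb = torch.randn(4, 8)
score = lambda e: (e.sum()) ** 2
per_token = integrated_gradients(score, emb).sum(dim=-1)
print(per_token)
The resulting per-token scores can then be visualised over the input, in the same way as the other feature importance methods above." |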
| }, |
| { |
| "section_id": "3.1.5", |
| "parent_section_id": "3.1", |
| "section_name": "3.1.5. Datasets", |
| "text": "Tasks used for examining the interpretable methods discussed above include sentiment analysis, reading comprehension, natural machine translation, question answering and visual question answering. Below we list and summarise some common datasets that are used for these tasks:\nBeerAdvocate review dataset (McAuley\net al., 2012 ###reference_b118###) is a multi-aspect sentiment analysis dataset which contains around 1.5 million beer reviews written by online users. The average length of each review is about 145 words. These reviews are associated with the overall review of the beer or a particular aspect, such as the appearance, smell, palate and taste. Each written review also has a corresponding overall rating for beer and another four different ratings for the four review aspects, where each rating ranges from 0 to 5.\nIMDB (Maas\net al., 2011 ###reference_b113###) is a large movie review usually used for binary sentiment classification. The dataset contains 50k reviews labelled as positive or negative and is split in half into train and test sets. The average length for each review is 231 words and 10.7 sentences.\nWMT is a workshop for natural machine translation. Tasks announced in these workshops include translation of different language pairs, such as French to English, German to English and Czech to English in WMT14, and Chinese to English additionally added in WMT17. The sources are normally news and biomedical publications. For many papers examining interpretable methods, the commonly used datasets are French to English news and Chinese to English news.\nHotpotQA (Yang et al., 2018 ###reference_b192###) is a multi-hop QA dataset that contains 113K Wikipedia-based question-answer pairs where multiple documents are supposed to be used to answer each question. Apart from questions and answers, the dataset also contains sentence-level supporting facts for each document. This dataset is often used to experiment with interpretable methods for identifying sentence-level significant features for answer prediction.\nSQuAD (Rajpurkar et al., 2016 ###reference_b143###) is a reading comprehension dataset that contains 100k question-answer pairs from Wikipedia articles. SQuAD v2 (Rajpurkar\net al., 2018 ###reference_b142###) proposed in 2018 includes around 50K additional unanswerable questions to find the answerable questions with similar semantic meanings.\nVQA datasets are used for multi-modal tasks with both textual and visual inputs. VQA v1 (Antol et al., 2015 ###reference_b9###) is the first visual question-answering dataset. VQA v1 contains 204,721 images, 614,163 questions and 7,964,119 answers, where most images are authentic images extracted from MS COCO dataset (Lin et al., 2014 ###reference_b100###) and 50,000 images are newly generated abstract scenes of clipart objects. VQA v2 (Goyal et al., 2017 ###reference_b60###) is an improved version of VQA v1 that mitigates the biased question problem and contains 1M pairs of images and questions as well as ten answers for each question. Work on VQA commonly utilises attention weight extraction as a local interpretation method." |
| }, |
| { |
| "section_id": "3.2", |
| "parent_section_id": "3", |
| "section_name": "3.2. Natural Language Explanation", |
| "text": "Natural Language Explanation (NLE) refers to the method of generating free text explanations for a given pair of inputs and their prediction. In contrast to rational extraction, where the explanation text is limited to that found within the input, NLE is entirely freeform, making it an incredibly flexible explanation method. This has allowed it to be applied to tasks outside of NLP, including reinforcement learning (Ehsan\net al., 2018 ###reference_b51###), self-driving cars (Kim\net al., 2018 ###reference_b88###), and solving mathematical problems (Ling\net al., 2017 ###reference_b102###). We focus here on methods in which explanations are generated without any or minimal scaffolding, that is, we do not cover methods that form \u2018natural language explanations\u2019 by filling in templates, but rather cases where the explanation model is tasked with generating the entirety of the explanation content itself." |
| }, |
| { |
| "section_id": "3.2.1", |
| "parent_section_id": "3.2", |
| "section_name": "3.2.1. Multimodal NLE", |
| "text": "Multimodal NLE focuses on generating natural language explanations for tasks that involve multiple input modalities, including images and video. While explanations may span multiple modalities, we focus on cases where the explanations significantly involve natural language. Much work, including text-only NLE, stems from Hendricks et al. (2016 ###reference_b69###), which draws upon image captioning research to generate explanations for image classification predictions of bird images. The model first makes a prediction using an image classification network, and then the features from the final layers of the network are fed into an LSTM decoder (Hochreiter and\nSchmidhuber, 1997 ###reference_b74###) to generate the explanation text. The explanation is trained with a reinforcement learning-based approach both to match a ground truth correction and to be able to be used to predict the image label itself. Later work has directly built on this model by improving the use of image features used during the explanation generation (Wickramanayake\net al., 2019 ###reference_b182###), using a critic model to improve the relevance of the explanations (Hendricks\net al., 2018 ###reference_b70###), and conditioning on specific image attributes (ul Hassan et al., 2019 ###reference_b173###). Park et al. (2018 ###reference_b128###) make use of an attention mechanism to augment the text-only explanations with heatmap-based explanations and find that training a model to provide both types of explanations improves the quality of both the text and visual-based explanations. Most of these earlier approaches use learnt LSTM decoders to generate the explanations, learning a language generation module from scratch. Most of these methods generate their explanations post-hoc, making a prediction before generating an explanation. This means that while the explanations may serve as valid reasons for the prediction, they may also not truthfully reflect the reasoning process of the model itself. Wu and Mooney (2019 ###reference_b188###) attempt to build a multimodal model whose explanations better match the model\u2019s reasoning process by training the text generator to generate explanations that can be traced back to objects used for prediction in the image as determined by gradient-based attribution methods. 
They explicitly evaluate their model\u2019s faithfulness using LIME and human evaluation and find that this approach improves performance and does indeed result in explanations faithful to the gradient-based explanations.\nMore recently, NLE datasets have been developed for VQA (Huk Park et al., 2018), self-driving car decisions (Kim et al., 2018), arcade game agents (Ehsan et al., 2019), visual commonsense (Zellers et al., 2019), physical commonsense (Rajani et al., 2020), image manipulation detection (Da et al., 2021), explaining facial biometric scans (Mirzaalian et al., 2021), as well as for more general vision-language benchmarks (Kayser et al., 2021).\nThe recent rise of large pretrained language models (Peters et al., 2018a; Devlin et al., 2019b; Radford et al., 2019) has also impacted multimodal NLE, with recent approaches replacing the standard LSTM-based decoder with pretrained text generation models such as GPT-2 (Marasovi\u0107 et al., 2020; Kayser et al., 2021; Ayyubi et al., 2020) with a good deal of success. Kayser et al. (2021) additionally find that using a pre-trained unified vision-language model along with GPT-2 works better than other combinations of vision-only and language-only models. This suggests that further utilising the growing number of large pre-trained multimodal models, such as VLBERT (Su et al., 2020), UNITER (Chen et al., 2020), or MERLOT (Zellers et al., 2021), may lead to improved explanations for multimodal tasks. However, while these models often do yield higher-quality explanations that better align with human preferences, the use of large unified transformer models means that the faithfulness of these explanations in representing the reasoning process of the model is hard to determine, as the exact reasoning processes used by these large models are hard to uncover.
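\nArchitecturally, the post-hoc predict-then-explain pattern described above can be summarised in a few lines; the following is a schematic sketch of ours with untrained, randomly initialised stand-in modules, not the actual models from the cited papers:
import torch
import torch.nn as nn
torch.manual_seed(0)
num_classes, vocab, dim = 10, 1000, 64
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim), nn.ReLU())   # image encoder stand-in
classifier = nn.Linear(dim, num_classes)
decoder = nn.GRU(dim, dim, batch_first=True)                                     # explanation generator stand-in
word_head = nn.Linear(dim, vocab)
image = torch.randn(1, 3, 32, 32)
feats = encoder(image)                       # 1. encode the image
label = classifier(feats).argmax(-1)         # 2. make the prediction first (post-hoc explanation)
# 3. condition the text decoder on the same features and generate explanation tokens greedily
hidden = feats.unsqueeze(0)                  # (layers=1, batch=1, dim)
inp = torch.zeros(1, 1, dim)                 # start-of-sequence embedding (assumed)
explanation = []
for _ in range(5):
    out, hidden = decoder(inp, hidden)
    tok = word_head(out[:, -1]).argmax(-1)
    explanation.append(int(tok))
    inp = out                                # feed the decoder state forward (simplified)
print('predicted class:', int(label), 'explanation token ids:', explanation)
Because the explanation is generated after and separately from the prediction, nothing in this construction forces it to reflect the features the classifier actually used, which is exactly the faithfulness concern raised above." |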
| }, |
| { |
| "section_id": "3.2.2", |
| "parent_section_id": "3.2", |
| "section_name": "3.2.2. Text-only NLE", |
| "text": "Earlier work examining explanations accompanying NLP tasks largely examined integrating them as inputs for fact-checking, concept learning, and relation extraction (Srivastava\net al., 2017 ###reference_b163###; Hancock et al., 2018 ###reference_b65###; Alhindi\net al., 2018 ###reference_b5###). These efforts provided useful datasets for examining natural language explanations, but the first work examining generating natural language explanations for NLP tasks in an automated fashion was done by Camburu et al. (2018 ###reference_b24###), using a set of explanations gathered for the SNLI dataset (Bowman\net al., 2015 ###reference_b22###) called e-SNLI. Similar to the multimodal models discussed above, the baseline models for e-SNLI proposed in Camburu et al. (2018 ###reference_b24###) are made up of two parts: a predictor module and an explanation module, with the best performing model first generating explanations and then using these explanations to make predictions. While this tighter integration of explanation generation into the overall model may suggest more faithful and higher-quality explanations, Camburu et al. (2020 ###reference_b25###) shows that this model can still provide explanations that are inconsistent with their predictions, suggesting that either the explanations are faulty or the model uses a flawed decision-making process. Several works try to improve the faithfulness of such models by using generated explanations as inputs to the final predictor model (Kumar and\nTalukdar, 2020 ###reference_b92###; Zhao and\nVydiswaran, 2021 ###reference_b202###; Zhou et al., 2020 ###reference_b203###; Rajani\net al., 2019 ###reference_b140###). By \u2018explaining then predicting\u2019, the explanations are by construction used as part of the prediction process. This may aid overall model performance by exposing latent aspects of the task (Hase and Bansal, 2022 ###reference_b66###). Inoue et al. (2021 ###reference_b77###) additionally shows summarisation models can be trained to serve as explanation generation models for this construction. However, recent work Wiegreffe\net al. (2021 ###reference_b184###) suggests that jointly producing explanations actually results in models with a stronger correlation between the predicted label and explanation, suggesting these models are more faithful than explain-then-predict methods despite the different construction. Further evaluation linking the underlying model\u2019s predictive mechanics with the generated explanations (e.g. Prasad et al. (2021 ###reference_b135###) for highlighted rationales) may work to investigate further how much these explanations align with the underlying model.\nBeyond NLI, other early tasks to which NLE was applied include commonsense QA (Rajani\net al., 2019 ###reference_b140###) and user recommendations (Ni et al., 2019 ###reference_b126###). While early work used human-collected explanations, Ni et al. (2019 ###reference_b126###) shows that using distant supervision via rationales can also work well for training explanation-generating models. Li et al. (2021 ###reference_b96###) additionally embed extra non-text features (i.e., user id, item id) by using randomly initialised token embeddings. This provides a way to integrate non-text features besides the use of large pre-trained multimodal models.\nMuch like multimodal NLE, large pre-trained language models have also been integrated into text-based NLE tasks, and most recent papers make use of these models in some way. Rajani\net al. 
(2019 ###reference_b140###) introduce an NLE dataset for commonsense QA (\u2018cos-e\u2019) and use a pre-trained GPT model (Radford and Narasimhan, 2018 ###reference_b136###) to generate explanations that are then used to make a final prediction. More recently, wT5 (Narang et al., 2020 ###reference_b125###), which follows the T5 model (Raffel et al., 2020 ###reference_b138###) in framing explanation generation and prediction as a purely text-to-text task, generates the prediction followed by a text explanation. Subsequent work has shown that using these models allows good explanation generation (and may even improve performance) for tasks and settings with little data (Erliksson et al., 2021 ###reference_b54###; Marasovic et al., 2022 ###reference_b116###; Jang and Lukasiewicz, 2021 ###reference_b83###; Yordanov et al., 2021 ###reference_b193###). Automatically collecting explanations from existing datasets or generating explanations using existing models can also provide extra supervision for learning to generate NLEs in limited-data settings (Brahman et al., 2021 ###reference_b23###). This highlights the strength of NLEs: by framing the explanation as a text generation problem, explanation generation is as simple as finetuning or even few-shot prompting a large language model to produce explanations, often with fairly good results. However, while these approaches are often impressive, generated explanations can still \u2018hallucinate\u2019 data not actually present in the training or input data and fail to generalise to challenging test sets such as HANS (Zhou and Tan, 2021 ###reference_b205###)." |
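To illustrate the text-to-text framing of NLE discussed above, the following is a small sketch in the spirit of wT5 (Narang et al., 2020), where a single sequence-to-sequence model emits the label and a free-text explanation as one output string. The prompt and target templates, checkpoint name, and example sentences are hypothetical placeholders, not the exact format used in the original work.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical source/target templates approximating a wT5-style setup.
def build_example(premise, hypothesis, label=None, explanation=None):
    source = f"explain nli premise: {premise} hypothesis: {hypothesis}"
    target = None if label is None else f"{label} explanation: {explanation}"
    return source, target

tokenizer = T5Tokenizer.from_pretrained("t5-small")            # example checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

source, _ = build_example(
    premise="A man is playing a guitar on stage.",
    hypothesis="A person is performing music.",
)
inputs = tokenizer(source, return_tensors="pt")

# After fine-tuning on (source, target) pairs, a single generate() call
# produces both the predicted label and the natural language explanation.
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```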
| }, |
| { |
| "section_id": "3.2.3", |
| "parent_section_id": "3.2", |
| "section_name": "3.2.3. NLE in Dialog", |
| "text": "While the above work has all assumed a setup where a model is able to generate only one explanation and has no memory of previous interactions with a user, some work has examined dialog-based setups where a user is assumed to repetitively interact with a model. Madumal et al. (2018 ###reference_b114###) propose a model for the components of an explanation dialog comprising of two sections: an explanation dialog, which consists mainly of presenting and accepting explanations; and an argument dialog, where the provided explanation is challenged with an argument. Rebanal\net al. (2021 ###reference_b145###) draw on QA systems to design a model for explaining basic algorithms, presenting the model as an \u2018interactive dialog that allows users to ask for specific kinds of explanations they deem useful\u2019. More recently, Li\net al. (2022 ###reference_b98###) use feedback from users as explanations to supervise and improve an open domain QA model, showing how models can improve by taking into account live feedback from users. Given the success of using human-written instructions to train large models (Sanh et al., 2022 ###reference_b152###; Wei et al., 2022 ###reference_b181###), making further use of human feedback to improve and guide the way explanations are generated may further improve the quality and utility of NLEs." |
| }, |
| { |
| "section_id": "3.2.4", |
| "parent_section_id": "3.2", |
| "section_name": "3.2.4. Datasets", |
| "text": "There are a number of NLE datasets for NLP tasks, which we summarise in Table 1 ###reference_###. Many of these datasets consist of human-generated explanations applied to existing datasets, or make use of some automatic extraction method to retrieve explanations from supporting documents. While most datasets simply present one explanation per input sample, others present setups where multiple explanations are attached to each sample, but only one is valid (Wang\net al., 2019a ###reference_b177###; Zhang\net al., 2020 ###reference_b200###). Wiegreffe and\nMarasovic (2021 ###reference_b183###) also summarise existing NLE-for-NLP datasets, focussing also on text-based rationale and structured explanation datasets. We also provide a list of datasets for multimodal NLE in Table 2 ###reference_###." |
| }, |
| { |
| "section_id": "3.2.5", |
| "parent_section_id": "3.2", |
| "section_name": "3.2.5. Challenges and Future work", |
| "text": "NLE is very attractive as a human-comprehensible approach to interpretation: rather than trying to utilise model parameters, NLE-based approaches essentially allow their models to \u2018talk for themselves\u2019. Despite being freely generated, these explanations still display a degree of faithfulness in their agreement with gradient-based explanation methods and can be quite robust to noise (Wiegreffe\net al., 2021 ###reference_b184###). This suggests that this approach exhibits a degree of faithfulness and stability despite a lack of formal guarantee that these methods have either quality. Furthermore, pipeline methods that use explanations for predictions can further guarantee that the generated explanations represent the information being used for prediction, even if their performance suffers compared to joint prediction models. NLEs have the benefit of being extremely comprehensible: unlike text rationales or gradient methods, which often require some understanding of the model being used, natural language explanations can be easily read and understood by anyone, and tailoring explanations to a specific audience is \u2018simply\u2019 a matter of training a model on similar explanations, which is even possible in low-data scenarios (Erliksson et al., 2021 ###reference_b54###; Marasovic et al., 2022 ###reference_b116###; Jang and\nLukasiewicz, 2021 ###reference_b83###; Yordanov et al., 2021 ###reference_b193###). Finally, the trustworthiness of NLE methods is not often explicitly evaluated. The focus has been put on the overall \u2018explanation quality\u2019 when evaluating NLEs (Inoue et al., 2021 ###reference_b77###; Clinciu\net al., 2021 ###reference_b37###). While rating \u2018explanation quality\u2019 may in some ways suggest how trustworthy the annotators find the explanations, more careful consideration of the type of contract-based trust (Jacovi et al., 2021 ###reference_b80###) an NLE-based model may involve is required in determining the utility of deploying these models in real-world scenarios.\nOverall, NLE is a very flexible and attractive explanation method, with the potential to greatly improve model explainability without requiring complex setups: just train your model to output explanations (Narang et al., 2020 ###reference_b125###). However, evaluation must be carefully considered due to issues with automated metrics (Clinciu\net al., 2021 ###reference_b37###) and the human-generated explanations themselves (Carton\net al., 2020 ###reference_b27###). In addition, further exploring the link between generated NLEs and other explanation or interpretability methods may further yield insights into models and improve our understanding of the faithfulness of this method.\nfor tree=\nforked edges,\nfont=,\ndraw,\ngrow\u2019 = 0,\nsemithick,\nrounded corners,\ntext width = 2.3cm,\ns sep = 6pt,\nnode options = align = center,\ncalign=child edge,\ncalign child=(n_children()+1)/2\n\n[Probing, fill=gray!35\n[Embedding Probes, for tree=fill=cyan!35\n[Word Embedding\n[Rubinstein et al. (2015 ###reference_b150###);\nK\u00f6hn (2015 ###reference_b90###);\nGupta\net al. (2015 ###reference_b62###);\nSommerauer and\nFokkens (2018 ###reference_b160###)\n]\n]\n[Sentence Embedding\n[Ettinger\net al. (2016 ###reference_b55###);\nGupta\net al. (2015 ###reference_b62###);\nAdi et al. (2016 ###reference_b3###);\nConneau et al. (2018 ###reference_b38###);\nSorodoc\net al. (2020 ###reference_b161###)\n]\n]\n]\n[Model Probes, for tree=fill=purple!35\n[NMT Models\n[Shi\net al. 
(2016 ###reference_b157###);\nBelinkov et al. (2017a ###reference_b19###);\nBelinkov et al. (2017b ###reference_b21###);\nRaganato and\nTiedemann (2018 ###reference_b139###);\nDalvi et al. (2019 ###reference_b40###)\n]\n]\n[Language Models\n[Hupkes\net al. (2018 ###reference_b76###);\nGiulianelli et al. (2018 ###reference_b58###);\nJumelet and\nHupkes (2018 ###reference_b85###);\nZhang and Bowman (2018 ###reference_b201###);\nSorodoc\net al. (2020 ###reference_b161###);\nBelinkov and\nGlass (2019 ###reference_b20###);\nPrasad and Jyothi (2020 ###reference_b134###)\n]\n]\n[Deep Pre-trained Models\n[Peters\net al. (2018c ###reference_b130###);\nClark\net al. (2019a ###reference_b35###);\nClark\net al. (2019a ###reference_b35###);\nLin\net al. (2019 ###reference_b101###);\nHewitt and\nManning (2019 ###reference_b73###);\nPeters et al. (2018b ###reference_b132###);\nLiu\net al. (2019a ###reference_b105###);\nTenney et al. (2019 ###reference_b170###);\nKlafka and\nEttinger (2020 ###reference_b89###)\n]\n]\n]\n]" |
| }, |
| { |
| "section_id": "3.3", |
| "parent_section_id": "3", |
| "section_name": "3.3. Probing", |
| "text": "Linguistic probes, also referred to as \u2018diagnostic classifiers\u2019 (Hupkes\net al., 2018 ###reference_b76###) or \u2018auxiliary tasks\u2019 (Adi et al., 2016 ###reference_b3###), are a post-hoc method for examining the information stored within a model. Specifically, the probes themselves are (often small) classifiers that take as input some hidden representations (either intermediate representations within a model or word embeddings) and are trained to perform some small linguistic task, such as verb-subject agreement (Giulianelli et al., 2018 ###reference_b58###) or syntax parsing (Hewitt and\nManning, 2019 ###reference_b73###). Intuition follows that if there is more task-relevant information present within the hidden representations, the classifier will perform better, thus allowing researchers to determine the presence or lack of presence of linguistic knowledge within both word embeddings and at various layers within a model. However, recent research (Hewitt and\nManning, 2019 ###reference_b73###; Pimentel et al., 2020 ###reference_b133###; Ravichander\net al., 2021 ###reference_b144###) has shown that probing experiments require careful design and consideration of truly faithful measurements of linguistic knowledge.\nWhile current probing methods do not provide layperson-friendly explanations, they do allow for research into the behaviour of popular models, allowing a better understanding of what linguistic and semantic information is encoded within a model (Lin\net al., 2019 ###reference_b101###). Hence, the target audience of a probe-based explanation is not a layperson, as is the case with other interpretation methods discussed in this paper, but rather an NLP researcher or ML practitioner who wishes to gain a deeper understanding of their model. Note we do not provide a list of common datasets in this section, unlike the previous sections, as probing research has largely not focused on any particular subset of datasets and can be applied to most text-based tasks." |
| }, |
| { |
| "section_id": "3.3.1", |
| "parent_section_id": "3.3", |
| "section_name": "3.3.1. Embedding Probes", |
| "text": "Early work on probing focused on using classifiers to determine what information could be found in distributional word embeddings (Mikolov et al., 2013 ###reference_b119###; Pennington\net al., 2014 ###reference_b129###). For example, Rubinstein et al. (2015 ###reference_b150###); K\u00f6hn (2015 ###reference_b90###); Gupta\net al. (2015 ###reference_b62###) all investigated the information captured by word embedding algorithms through the use of simple classifiers (e.g. linear or logistic classifiers) to predict properties of the embedded words, such as part-of-speech or entity attributes (e.g. the colour of the entity referred to by a word). These works all found word embeddings captured the properties probed for, albeit to varying extents. More recently, Sommerauer and\nFokkens (2018 ###reference_b160###) used both a logistic classifier and a multi-layer perceptron (MLP) to determine the presence of certain semantic information in Word2Vec embeddings, finding that visual properties (e.g. colour) were not represented well, while functional properties (e.g. \u2018is dangerous\u2019) were. Research into distributional models has reduced currently due to the rise of pre-trained language models such as BERT (Devlin\net al., 2019a ###reference_b42###).\nAlongside word embeddings, sentence embeddings have also been the target of analysis via probing. Ettinger\net al. (2016 ###reference_b55###) (following Gupta\net al. (2015 ###reference_b62###)) trains a logistic classifier to classify if a sentence embedding contains specific words and specific words with specific semantic roles. Adi et al. (2016 ###reference_b3###) trains MLP classifiers on sentence embeddings to determine if the embeddings contain information about sentence length, word content, and word order. They examine LSTM auto-encoder, continuous bag-of-words (CBOW), and skip-thought embeddings, finding that CBOW is surprisingly effective at encoding the properties of sentences examined in low dimensions, while the LSTM auto-encoder-based embeddings perform very well, especially with a larger number of dimensions. Further developing on this work, Conneau et al. (2018 ###reference_b38###) proposes ten different probing tasks, covering semantic and syntactic properties of sentence embeddings and controlling for various cues that may allow a probe to \u2018cheat\u2019 (e.g. lexical cues). In order to determine if encoding these properties aids models in downstream tasks, the authors also measure the correlation between probing task performance and performance on a set of downstream tasks. More recently, Sorodoc\net al. (2020 ###reference_b161###) proposes 14 additional new probing tasks for examining information stored in sentence embeddings relevant to relation extraction." |
| }, |
| { |
| "section_id": "3.3.2", |
| "parent_section_id": "3.3", |
| "section_name": "3.3.2. Model Probes", |
| "text": "Following the work on probing distributional embeddings, Shi\net al. (2016 ###reference_b157###) extended probing to NLP models, training a logistic classifier on the hidden states of LSTM-based neural machine translation (NMT) models to predict various syntactic labels. Similarly, they train various decoder models to generate a parse tree from the encodings provided by these models. By examining the performance of these probes on different hidden states, they find that lower-layer states contain more fine-grained word-level syntactic information, while higher-layer states contain more global and abstract information. Following this, Belinkov et al. (2017a ###reference_b19###) and Belinkov et al. (2017b ###reference_b21###) both examine NMT models with probes in more detail, uncovering various insights about the behaviour of NMT models, including a lack of powerful representations in the decoder, and that the target language of a model has little effect on the source language representation quality. Rather than a logistic classifier, both papers use a simple neural network with one hidden layer and a ReLU non-linearity, due to this reporting similar trends as a more simple classifier but with better performance. More recently, Raganato and\nTiedemann (2018 ###reference_b139###) analysed transformer-based NMT models using a similar probing technique alongside a host of other analyses. Finally, Dalvi et al. (2019 ###reference_b40###) presented a method for extracting salient neurons from an NMT model by utilising a linear classifier, allowing examination of not just information present within a model but also what parts of the model contribute most to both specific tasks and the overall performance of the model.\nProbing is not limited to NMT, however: research has also turned to examining the linguistic information encoded by language models. Hupkes\net al. (2018 ###reference_b76###) utilised probing methods to explore how well an LSTM model for solving basic arithmetic expressions matches the intermediate results of various solution strategies, thus examining how LSTM models break up and solve problems with nested structures. Utilising the same method, Giulianelli et al. (2018 ###reference_b58###) investigated how LSTM-based language models tracked agreement. The authors trained their probe (a linear model) on the outputs of an LSTM across timesteps and components of the model, showing how the information encoded by the LSTM model changes over time and in model parts. Jumelet and\nHupkes (2018 ###reference_b85###); Zhang and Bowman (2018 ###reference_b201###) also probe LSTM-based models for particular linguistic knowledge, including NPI-licensing and CCG tagging. Importantly, the authors find that even untrained LSTM models contain information probe-based models can exploit to memorise labels for particular words, highlighting the need for careful control of probing tasks (we discuss this further in the next section). More recently, Sorodoc\net al. (2020 ###reference_b161###) probe LSTM and transformer-based language for referential information. We also note that probing has been applied to speech processing-based models (Belinkov and\nGlass, 2019 ###reference_b20###; Prasad and Jyothi, 2020 ###reference_b134###).\nFinally, probing-based analyses of deep pre-trained language models have also been popular as a method for understanding how these models internally represent language. Peters\net al. 
(2018c ###reference_b130###) briefly utilised linear probes to investigate the presence of syntactic information in bidirectional LSTM models, finding that POS tagging is learnt in lower layers than constituent parsing. Recently, both Lin\net al. (2019 ###reference_b101###) and Clark\net al. (2019a ###reference_b35###) used probing classifiers to investigate the information stored in BERT\u2019s hidden representations across both layers and heads. Clark\net al. (2019a ###reference_b35###) focused on attention, using a probe trained on attention weights in BERT to examine dependency information, while Lin\net al. (2019 ###reference_b101###) focused on examining syntactic and positional information across layers. Hewitt and\nManning (2019 ###reference_b73###) examined representations generated by ELMo (Peters et al., 2018b ###reference_b132###) and BERT, training a small linear model to predict the distance between words in a parse tree of a given sentence. Liu\net al. (2019a ###reference_b105###) proposed and examined sixteen different probing tasks, involving tagging, segmentation, and pairwise relations, utilising a basic linear model. They compared results across several models, including BERT and ELMo, examining the performance of the models on each task across layers. Tenney et al. (2019 ###reference_b170###) trained two-layer MLP classifiers to predict labels for various NLP tasks (POS tagging, named entity labelling, semantic role labelling, etc.), using the representations generated by four different contextual encoder models. They found that the contextualised models improve more on syntactic tasks than semantic tasks when compared to non-contextual embeddings and found some evidence that ELMo does encode distant linguistic information. Klafka and\nEttinger (2020 ###reference_b89###) investigated how much information about surrounding words can be found in contextualised word embeddings, training MLP classifiers to predict aspects of important words within the sentence, e.g. predicting the gender of a noun from an embedding associated with a verb in the same sentence." |
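The layer-wise analyses above share a common recipe: extract hidden states from every layer of a pre-trained model and train one probe per layer, then compare accuracies to see where a property is most accessible. Below is a minimal sketch using the HuggingFace transformers API; the example sentences, labels, mean-pooling, and the choice of a linear probe are illustrative simplifications rather than the setup of any particular paper.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# Toy probing data: sentences paired with a sentence-level property to probe for.
sentences = ["the cat sat on the mat", "a dog ran quickly", "birds can fly",
             "she reads books", "they were singing loudly", "he will arrive soon"]
labels = np.array([0, 1, 0, 0, 1, 1])           # placeholder linguistic labels

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden_states = model(**enc).hidden_states  # embeddings + one tensor per layer

# Train one probe per layer on mean-pooled token representations.
for layer, h in enumerate(hidden_states):
    pooled = h.mean(dim=1).numpy()              # (num_sentences, hidden_size); padding mask ignored for brevity
    probe = LogisticRegression(max_iter=1000).fit(pooled, labels)
    print(f"layer {layer:2d}: train accuracy {probe.score(pooled, labels):.2f}")
```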
| }, |
| { |
| "section_id": "3.3.3", |
| "parent_section_id": "3.3", |
| "section_name": "3.3.3. Probe Considerations and Limitations", |
| "text": "The continued growth of probing-based papers has also led to recent work examining best practices for probes and how to interpret their results. Hewitt and Liang (2019 ###reference_b72###) considered how to ensure that a probe is genuinely reflective of the underlying information present in a model and proposed the use of a control task, a randomised version of a probe task in which high performance is only possible by memorisation of inputs. Hence, a faithful probe should perform well on a probe task and poorly on a corresponding control task if the underlying model does indeed contain the information being probed for. The authors found that most probes (including linear classifiers) are over-parameterised, and discuss methods for constraining complex probes (e.g. multilayer perceptrons) to improve faithfulness while still allowing them to achieve similar results.\nWhile most papers we have discussed above follow the intuition that probes should avoid complex probes to prevent memorisation, Pimentel et al. (2020 ###reference_b133###) suggest that instead the probe with the best score on a given task should be chosen as the tightest estimate, since simpler models may simply be unable to extract the linguistic information present in a model, and such linguistic information cannot be \u2018added\u2019 by more complex probes (since their only input are hidden representations). In addition, the authors argue that memorisation is an important part of linguistic competence, and as such probes should not be artificially punished (via control tasks) for doing this. Recent work has also presented methods that avoid making assumptions about probe complexity, such as MDL probing (Voita and Titov, 2020 ###reference_b176###; Lovering\net al., 2021 ###reference_b107###), which directly measures \u2018amount of effort\u2019 needed to achieve some extraction task, or DirectProbe (Zhou and Srikumar, 2021 ###reference_b204###), which directly examines intermediate representations of models to avoid having to deal with additional classifiers.\nFinally, Hall Maudslay et al. (2020 ###reference_b63###) compared the structural probe (Hewitt and\nManning, 2019 ###reference_b73###) with a lightweight dependency parser (both given the same inputs) and demonstrated that the parser is generally able to extract more syntactic information from BERT embedding. In contrast, the probe performs better with a different metric, showing that the choice of metric is important for probes: when testing for evidence of linguistic information, one should consider not only the nature of the probe but also the metric used to evaluate it. Furthermore, the significance of well-performing probes is not clear: models may encode linguistic information not actually used by the end-task (Ravichander\net al., 2021 ###reference_b144###), showing that the presence of linguistic information does not imply it is being used for prediction. Some approaches proposed later that integrated the causal approaches such as amnesiac probing (Elazar\net al., 2021 ###reference_b53###), which directly intervene in the underlying model\u2019s representations, might be a possible solution to distinguish between these cases." |
| }, |
| { |
| "section_id": "3.3.4", |
| "parent_section_id": "3.3", |
| "section_name": "3.3.4. Interpretability of Probes and Future Work", |
| "text": "As noted at the beginning of the section, probing is a way for NLP researchers to investigate models rather than end-users. As such, their comprehensibility is relatively low: understanding probing results requires understanding the linguistic properties they are probing and the more complex experimental setups they make use of (as simple metrics such as task accuracy do not show the whole story (Hewitt and Liang, 2019 ###reference_b72###)). However, probes are naturally fairly faithful in that they directly use the model\u2019s hidden states and are specifically designed to represent only information present within these hidden states. This faithfulness is degraded somewhat by the fact that this information may not be used for predictions (Ravichander\net al., 2021 ###reference_b144###), but recent causal approaches work towards alleviating this. This also suggests that probing results could be considered trustworthy only when the experimental design is carefully considered, in that their results can only be relied upon if carefully controlled. Finally, probing methods are often reasonably stable for the same model and property, as the probe classifier is trained to some convergence. However, across models (even those with the same architecture but just trained on different data), results can differ quite drastically (Elazar\net al., 2021 ###reference_b53###), which shows differences between pre-trained and finetuned BERT models. This is more likely to be a function of the underlying models rather than the technique, but also shows that probing results are very specific to the models and properties being examined.\nOverall, probes are exciting and valuable tools for investigating models\u2019 \u2018inner workings\u2019. However, much like other explanation methods, the setup and evaluation of probing techniques must be carefully considered. Some future works of the probing may be associated with the integration of some causal methods (Elazar\net al., 2021 ###reference_b53###) as better approaches to make stronger statements about what a model is and isn\u2019t using for its predictions, better allowing probing to provide explanations for model judgements rather than just show what could be potentially used. Combining this with methods that further reduce the complexity of probing setups (Zhou and Srikumar, 2021 ###reference_b204###) may allow even simpler and better ways to get insights into NLP models. Causal models have been applied to the traditional predictive tasks and covered with the convergence of causal inference and language processing (Feder\net al., 2022 ###reference_b56###). Recent NLP works have tried to involve auxiliary causal-based approaches in their models (Feder\net al., 2022 ###reference_b56###; Heskes\net al., 2020 ###reference_b71###). Such involvement of causal approaches can be seen as a future trend of interpretable NLP tasks including probing. However, the essence of causal models is different from the association essence of neural networks. Thus, we consider a detailed discussion of causal approaches is out of the scope of this survey. But we do notice that this could be a future trend for the further development of probing." |
| }, |
| { |
| "section_id": "4", |
| "parent_section_id": null, |
| "section_name": "4. Evaluation Methods", |
| "text": "" |
| }, |
| { |
| "section_id": "4.1", |
| "parent_section_id": "4", |
| "section_name": "4.1. Evaluation of Feature Importance", |
| "text": "" |
| }, |
| { |
| "section_id": "4.1.1", |
| "parent_section_id": "4.1", |
| "section_name": "4.1.1. Automatic Evaluation", |
| "text": "Evaluations on the interpretable methods of extracting important features usually align with the evaluation of the explanation faithfulness, i.e. whether the extracted features are sufficient and accurate enough to result in the same label prediction as the original inputs. When the datasets come with pre-annotated explanations, the extracted features used as the explanation can be compared with the ground truth annotation through exact matching or soft matching. The exact matching only considers the validness of the explanation when it is exactly the same as the annotation, and such validity is quantified through the precision score. For example, the HotpotQA dataset provides annotations for supporting facts, allowing a model\u2019s accuracy in reporting these supporting facts to be easily measured. This is commonly used for extracting rationals, where the higher the precision score, the better the model matches human-annotated explanations, likely indicating improved interpretability. On the country, soft matching will take the extracted features as a valid explanation if some features (tokens/phrases in the case of NLP) matched with the annotation. For instance, DeYoung et al. (2020 ###reference_b44###) proposed Intersection-Over-Union (IOU) on the token level, taking the overlap size of the tokens over two spans divided by the union of their token sizes and considering the extracted rationales as a valid explanation if the IOU score is over 0.5.\nHowever, DeYoung et al. (2020 ###reference_b44###) also argued that the matching between the identified features and the annotation only measures the plausibility of interpretability but not faithfulness. In other words, either the exact matching or soft matching can reveal if the model\u2019s decisions truly depend on the identified contributing features. Therefore, some other erasure-based metrics are specifically proposed to evaluate the impact of the identified important features to the model\u2019s results. For example, Du\net al. (2019b ###reference_b49###) proposed a faithfulness score to verify the importance of the identified contributing sentences or words to a given model\u2019s outputs. It is assumed that the probability values for the predicted class will significantly drop if the truly important inputs are removed. The score is calculated as in equation 1 ###reference_###:\nwhere is the predicted probability for a given target class with original inputs and is the predicted probability for the target class for the input with significant sentences/words removed.\nThe Comprehensiveness score proposed by DeYoung et al. (2020 ###reference_b44###) in later years is calculated in the same way as the Faithfulness score (Du\net al., 2019b ###reference_b49###). What is to be noted here is that the Comprehensiveness score is not related to the evaluation of the comprehensibility of interpretability but to measure whether all the identified important features are needed to make the same prediction results. A high score implies the enormous influence of the identified features, while a negative score indicates that the model is more confident in its decision without the identified rationales. DeYoung et al. (2020 ###reference_b44###) also proposed a Sufficiency score to calculate the probability difference from the model for the same class once only the identified significant features are kept as the inputs. 
Thus, as opposed to the Comprehensiveness score or Faithfulness score, a lower Sufficiency score indicates higher faithfulness of the selected features.\nApart from using the above evaluation metrics, another direct way to evaluate the validity of explanations for a model\u2019s output is to examine the decrease in the model\u2019s performance, measured by the task\u2019s standard evaluation metrics, after removing or perturbing the identified important input features (i.e. words/phrases/sentences). For example, He et al. (2019 ###reference_b68###) measured the change in BLEU scores to examine whether certain input words were essential to the predictions in neural machine translation." |
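A small sketch of the erasure-based metrics just described, following the definitions in DeYoung et al. (2020): comprehensiveness measures the probability drop when the rationale tokens are removed, and sufficiency the probability drop when only the rationale tokens are kept. The `predict_proba` argument stands in for any model wrapper that returns class probabilities for a token list; the commented usage at the end is hypothetical.

```python
def comprehensiveness(predict_proba, tokens, rationale_idx, target_class):
    """p(y|x) - p(y|x without rationale tokens); higher suggests the rationale mattered."""
    full = predict_proba(tokens)[target_class]
    reduced = [t for i, t in enumerate(tokens) if i not in rationale_idx]
    return full - predict_proba(reduced)[target_class]

def sufficiency(predict_proba, tokens, rationale_idx, target_class):
    """p(y|x) - p(y|rationale tokens only); lower suggests the rationale suffices."""
    full = predict_proba(tokens)[target_class]
    rationale_only = [t for i, t in enumerate(tokens) if i in rationale_idx]
    return full - predict_proba(rationale_only)[target_class]

# Hypothetical usage with a sentiment model wrapper returning [p_negative, p_positive]:
# probs = lambda toks: sentiment_model(" ".join(toks))
# comp = comprehensiveness(probs, "a truly great film".split(), {1, 2}, target_class=1)
# suff = sufficiency(probs, "a truly great film".split(), {1, 2}, target_class=1)
```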
| }, |
| { |
| "section_id": "4.1.2", |
| "parent_section_id": "4.1", |
| "section_name": "4.1.2. Human Evaluation", |
| "text": "Human evaluation is also a common and straightforward but relatively more subjective method for evaluating the validity of explanations for a model. This can be done by researchers themselves or by a large number of crowd-sourced participants (sourced from, e.g. Amazon Mechanical Turk). For example, Chen\net al. (2018 ###reference_b31###) asked Amazon Mechanical Turk workers to predict the sentiment based on predicted keywords in a text, examining the faithfulness of the selected features as interpretation. Sha\net al. (2021 ###reference_b155###) sampled 300 input-output-interpretation cases to ask the human evaluator to examine whether the selected features are useful (to explain the output), complete (enough to explain the output) and fluent to read).\nWhile faithfulness can be evaluated more easily via automatic evaluation metrics, the comprehensibility and trustworthiness of interpretations usually are evaluated through human evaluations in the current research works. Though using large numbers of participants helps remove the subjective bias, this requires the cost of setting up larger-scale experiments, and it is also hard to ensure that every participant understands the task and the evaluation criteria. It is undoubtedly that the human evaluation results can provide some hints about the interpretation validity and comprehensibility, but we cannot erase the suspicion of the existence of subjective bias, which also limits further references and fair comparison of the human evaluation results for future works." |
| }, |
| { |
| "section_id": "4.2", |
| "parent_section_id": "4", |
| "section_name": "4.2. Evaluation of NLE", |
| "text": "" |
| }, |
| { |
| "section_id": "4.2.1", |
| "parent_section_id": "4.2", |
| "section_name": "4.2.1. Automatic Evaluation", |
| "text": "As NLE involves generating text, the automatic evaluation metrics for NLE are generally the same metrics used in tasks with free-form text generation, such as machine translation or summarization. As such, standard automated metrics for NLE are BLEU (Papineni\net al., 2002 ###reference_b127###), METEOR (Denkowski and\nLavie, 2014 ###reference_b41###), ROUGE (Lin, 2004 ###reference_b99###), CIDEr (Vedantam et al., 2015 ###reference_b175###), and SPICE (Anderson et al., 2016 ###reference_b7###), with all five generally being reported in VQA-based NLE papers. Perplexity is also occasionally reported (Camburu et al., 2018 ###reference_b24###; Ling\net al., 2017 ###reference_b102###), keeping in line with other natural language generation-based works. However, these automated metrics must be used carefully, as recent work has found they often correlate poorly with human judgements of explanation quality. Clinciu\net al. (2021 ###reference_b37###) suggest that model-based scores such as BLUERT and BERTScore better correlate with human judgements, and Hase\net al. (2020 ###reference_b67###) point out that only examining how well the explanation output matches labels does not measure how well the explanations accurately reflect the model\u2019s behaviour.\nAdditionally, the quality of the annotated human explanations collected in datasets such as e-SNLI has also come into question. Carton\net al. (2020 ###reference_b27###) find that human-created explanations across several datasets perform poorly at metrics such as sufficiency and comprehensiveness, suggesting they do not contain all that is needed to explain a given judgement. This suggests that just improving our ability to compare generated explanations with human-generated ones may not be enough to best measure the quality of a given generated explanation, and further work in improving the gold annotations provided by explanation datasets could also help." |
| }, |
| { |
| "section_id": "4.2.2", |
| "parent_section_id": "4.2", |
| "section_name": "4.2.2. Human Evaluation", |
| "text": "Given the limitations of current automatic evaluation methods, and the free-form nature of NLE, human evaluation is always necessary to truly judge explanation quality. Such evaluation is most commonly done by getting crowdsourced workers to rate the generated explanations (either just as correct/not correct or on a point scale), which allows easy comparison between models. In addition, Liu\net al. (2019b ###reference_b104###) uses crowdsourced workers to compare their model\u2019s explanations against another, with workers noting which model\u2019s explanation related best to the final classification results. Considering BLEU and similar metrics do not necessarily correlate well with human intuition, all work on NLE should include human evaluation results to some level, even if the evaluation is limited (e.g. just on a sample of generated explanations)." |
| }, |
| { |
| "section_id": "4.3", |
| "parent_section_id": "4", |
| "section_name": "4.3. Evaluation of Probing", |
| "text": "As probing tasks are more tests for the presence of linguistic knowledge rather than explanations, the evaluation of probing tasks differs according to the tasks. However, careful consideration should be given to the choice of metric. As Hall Maudslay et al. (2020 ###reference_b63###) showed, different evaluation metrics can result in different apparent performances for different methods, so the motivation behind a particular metric should be considered. Beyond metrics, Hewitt and Liang (2019 ###reference_b72###) suggested that the selectivity of probes should also be considered, where selectivity is defined as the difference between probe task accuracy and control task555A control task being a variant of the probe task which utilises random outputs to ensure that high scores on the task are only possible through \u2018memorisation\u2019 by the probe. accuracy. While best practices for probes are still being actively discussed in the community (Pimentel et al., 2020 ###reference_b133###), control tasks are undoubtedly helpful tools for further investigating and validating the behaviour of models uncovered by probes." |
| }, |
| { |
| "section_id": "5", |
| "parent_section_id": null, |
| "section_name": "5. Discussion and Conclusion", |
| "text": "This paper focused on the local interpretable methods commonly used for natural language processing models. In this survey, we have divided these methods into three different categories based on their underlying characteristics: 1) explaining the model\u2019s outputs from the input features, where these features could be identified through rationale extraction, perturbing inputs, traditional attribution methods, and attention weight extraction; 2) generating the natural language explanations corresponding to each input; 3) using diagnostic classifiers to analyse the hidden information stored within a model. For each method type, we have also outlined the standard datasets used for different NLP tasks and different evaluation methods for examining the validity and efficacy of the explanations provided.\nBy going through the current local interpretable methods in the field of NLP, we identified several limitations and research gaps to be overcome to develop explanations that can stably and faithfully explain the model\u2019s decisions and be easily understood and trusted by users. Firstly, as we have stated in section 1.1.1 ###reference_.SSS1###, there is currently no unified definition of interpretability across the interpretable method works. While some researchers distinguish interpretability and explainability as two separate concepts (Rudin, 2018 ###reference_b151###) with different difficulty levels, many works use them as synonyms of each other, and our work also follows this way to include diverse works. However, such an ambiguous definition of interpretability/explainability leads to inconsistent interpretation validity for the same interpretable method. For example, the debate about whether the attention weights can be used as a valid interpretation/explanation between Wiegreffe and\nPinter (2019b ###reference_b186###) and Jain and Wallace (2019b ###reference_b82###) is due to the conflicting definition. The argument of Jain and Wallace (2019b ###reference_b82###) is based on the fact that only the faithful interpretable methods are truly interpretable, while Wiegreffe and\nPinter (2019b ###reference_b186###) argued that attention is an explanation if we accept that explanation should be plausible but not necessarily faithful as proposed by (Rudin, 2018 ###reference_b151###). Thus, we need a unified and legible definition of interpretability that should be broadly acknowledged and agreed to help further develop valid interpretable methods.\nSecondly, we need effective evaluation methods that can evaluate the multiple dimensions of interpretability, the results of which can be reliable for future baseline comparison. However, the existing evaluation metrics measure only limited interpretability dimensions. Taking the evaluation of rationales as an example, examining the matching between the extracted rationales and the human rationales only evaluates the plausibility but not faithfulness (DeYoung et al., 2020 ###reference_b44###). However, when it comes to the faithfulness evaluation metrics (Chrysostomou and\nAletras, 2021 ###reference_b34###; Serrano and Smith, 2019b ###reference_b154###; DeYoung et al., 2020 ###reference_b44###; Arya\net al., 2019 ###reference_b11###), the evaluation results on the same dataset can be opposite by using different evaluation metrics. 
For example, the two evaluation metrics DFFOT (Serrano and Smith, 2019b ###reference_b154###) and SUFF (DeYoung et al., 2020 ###reference_b44###) reach opposite conclusions about the LIME method on the same dataset (Chan et al., 2022 ###reference_b29###). Moreover, the current automatic evaluation approaches mainly focus on the faithfulness and comprehensibility of interpretations and can hardly be applied to evaluate other dimensions, such as stability and trustworthiness; the evaluation of these other interpretability dimensions relies heavily on human evaluation. Though human evaluation is currently the best approach for evaluating generated interpretations from various aspects, it can be subjective and less reproducible. In addition, it is also essential to have efficient evaluation methods that can evaluate the validity of interpretations in different formats. For example, the evaluation of faithful NLE relies on BLEU scores to check the similarity of generated explanations with the ground truth explanations; however, such evaluation neglects the fact that natural language explanations whose content differs from the ground truth explanations can also be faithful and plausible for the same input and output pair. To sum up, there is still a considerable research gap in developing effective evaluation methods and frameworks that verify interpretable methods along various dimensions, and such development would also require explainable datasets with good-quality annotations. The evaluation framework should provide fair results that can be reused and compared by future works, and should be user-centric, taking into account the needs of different groups of users (Kaur et al., 2020 ###reference_b86###)." |
| }, |
| { |
| "section_id": "6", |
| "parent_section_id": null, |
| "section_name": "6. Future trend of Interpretability", |
| "text": "The future trend of developing interpretable methods cannot avoid further conquering the current limitations. Developing truly faithful interpretable methods that can precisely explain the model\u2019s decisions is critical to enable the vast application of deep neural networks to crucial fields, including medicine, justice and finance. Faithful interpretable methods and easily understandable interpretations are key to bringing users\u2019 trust to the model\u2019s decisions, especially for users without deep learning knowledge. It is natural for them to question the decisions from an unfamiliar technique. Providing faithful, comprehensible and stable interpretations of a model helps eliminate the questions and uncertainties about using a black-box model for any users.\nHowever, apart from the discussed limitations of the current interpretable methods, one existing problem is that evaluating whether an interpretation is faithful mainly considers the interpretations for the model\u2019s correct predictions. In other words, most existing interpretable works only explain why an instance is correctly predicted but do not give any explanations about why an instance is wrongly predicted. If the explanations of a model\u2019s correct predictions precisely reflect the model\u2019s decision-making process, this interpretable method will usually be regarded as a faithful interpretable method. However, it is also significant and irradiative to generate explanations of the wrong prediction results to investigate and examine which parts of the input instances the model attended to when made the wrong decision and whether those parts can reflect the model\u2019s wrong decision-making process. However, the interpretation and explanation of the model\u2019s wrong prediction are not considered in any existing interpretable works. Some works even directly consider the interpretations generated by their interpretable models for the models\u2019 wrong predictions are invalid and incorrect (Park et al., 2018 ###reference_b128###; Marasovi\u0107 et al., 2020 ###reference_b117###; Wu and Mooney, 2019 ###reference_b188###; Kayser et al., 2021 ###reference_b87###) and therefore, would not being taken into account for the measurement of intepretbality faithfulness. This seems reasonable when the current works are still struggling with developing interpretable methods that can at least faithfully explain the model\u2019s correct predictions. However, the interpretation of a model\u2019s decision should not only be applied to one side but to both correct and wrong prediction results.\nThis also brings us the reflection that the fundamental reason to develop model interpretability is more than providing evidence/support/explanation of a correct prediction to users to make them believe the model\u2019s correct decisions, but also to give them valuable guidance about why the model makes a wrong prediction. The comprehensive interpretations of a model\u2019s decisions should provide faithful explanations for the model\u2019s both correct and incorrect predictions. Such comprehensive interpretations from both sides are the key to developing the ultimate trustworthiness for black-box models and boosting their broader and more stable applications in required fields. 
Moreover, understanding the reasons for wrong predictions is also essential for deep learning researchers to better adjust and improve their models in future work.\nTherefore, future work on interpretability should fill the current research gap and develop interpretable models that can generate faithful and comprehensible interpretations for both correct and incorrect decisions made by the model, providing reliable information that improves the trust of non-experts in using deep neural networks in crucial fields and helps experts understand and improve the model more accurately." |
| } |
| ], |
| "appendix": [], |
| "tables": { |
| "1": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>Summary of datasets with natural language explanations for text-based tasks.</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.2.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.3.1.1\">Ref.</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.2.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.3.2.1\">Year</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.2.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.3.3.1\">Dataset Name</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.2.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.3.4.1\">Task</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.2.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.3.5.1\">Human-written explanations?</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Camburu et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib24\" title=\"\">2018</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.2\">2016</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.3\">e-SNLI</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.4\">NLI</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Jansen et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib84\" title=\"\">2016</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.2\">2016</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.4\">Science Exam QA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.5.5.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.5.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.5.1.1.1\">Extracted from</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.5.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.5.1.2.1\">auxiliary documents</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Ling\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib102\" title=\"\">2017</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.2\">2017</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.4\">Algebraic Word Problems</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.7.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Srivastava\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2103.11072v3#bib.bib163\" title=\"\">2017</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.7.2\">2017</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.7.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.7.4\">Email Phsishing classification</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.7.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Hancock et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib65\" title=\"\">2018</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.2\">2018</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.3\">BabbleLabble</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.4\">Relation Extraction</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Alhindi\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib5\" title=\"\">2018</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.2\">2018</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.3\">LIAR-PLUS</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.4\">Fact-checking</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.9.5.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.9.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.5.1.1.1\">Extracted from</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.9.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.5.1.2.1\">auxiliary documents</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Rajani\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib140\" title=\"\">2019</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.2\">2019</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.3\">cos-e</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.4\">Commonsense QA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Wang\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib177\" title=\"\">2019a</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.2\">2019</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.4\">Sense making</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.12.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Atkinson\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib12\" title=\"\">2019</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.12.2\">2019</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.12.3\">ChangeMyView</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.12.4\">Opinion changing</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.12.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.12.5.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.12.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.12.5.1.1.1\">Extracted from</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.12.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.12.5.1.2.1\">reddit posts</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.13.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Zhang\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib200\" title=\"\">2020</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.13.2\">2020</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.13.3\">WinoWhy</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.13.4\">Winograd Schema</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.13.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.14.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Kotonya and Toni, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib91\" title=\"\">2020</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.14.2\">2020</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.14.3\">PubHealth</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.14.4\">Medical claim fact-checking</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.14.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.14.5.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.14.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.14.5.1.1.1\">Extracted from</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.14.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.14.5.1.2.1\">auxiliary documents</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.15.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Wang et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib179\" title=\"\">2020</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.15.2\">2020</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.15.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.15.4\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.15.4.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.15.4.1.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.15.4.1.1.1\">Relation Extraction,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.15.4.1.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.15.4.1.2.1\">Sentiment Analysis</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.15.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.16.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Stammbach and Ash, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib166\" title=\"\">2020</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.16.2\">2020</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.16.3\">e-FEVER</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T1.2.16.4\">Fact-checking</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.16.5\">Generated using GPT-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.17\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.17.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Aggarwal et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib4\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.17.2\">2021</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.17.3\">ECQA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.17.4\">Commonsense QA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.17.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.2.3\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Brahman\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib23\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.2.4\">2021</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.1.1\">e--NLI</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.2.2\">\n-NLI Rationale Generation</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.2.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.2.5.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.5.1.1.1\">Extracted from</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.5.1.2.1\">auxiliary documents,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.5.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.5.1.3.1\">automatically generated</td>\n</tr>\n</table>\n</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 1. Summary of datasets with natural language explanations for text-based tasks." |
| }, |
| "2": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>Summary of datasets with natural language explanations for multimodal tasks.</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.1\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1\">Ref.</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.2.1\">Year</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.3.1\">Dataset Name</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.4.1\">Task</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.5\">Human-written explanations?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.2.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Huk\u00a0Park et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib75\" title=\"\">2018</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.2.2\">2018</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.2.3\">VQA-X</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.2.4\">Visual QA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.2.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Huk\u00a0Park et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib75\" title=\"\">2018</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.2\">2018</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.3\">ACT-X</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.4\">Activity Recognition</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.4.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Kim\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib88\" title=\"\">2018</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.4.2\">2018</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.4.3\">BDD-X</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.4.4\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.1.4.4.1\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4.4.1.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.4.4.1.1.1\">Self-Driving Car</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4.4.1.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.4.4.1.2.1\">Decision Explanation</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.4.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Li\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib97\" 
title=\"\">2018</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.2\">2018</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.3\">VQA-E</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.4\">Visual QA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.5\">Generated from captions</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.6.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Ehsan et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib52\" title=\"\">2019</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.6.2\">2019</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.6.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.6.4\">Frogger Game</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.6.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Zellers\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib198\" title=\"\">2019</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.2\">2019</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.3\">VCR</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.4\">Visual Commonsense Reasoning</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.8.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Rajani et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib141\" title=\"\">2020</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.8.2\">2020</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.8.3\">ESPIRIT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.8.4\">Physical Reasoning</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.8.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Lei\net\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib94\" title=\"\">2020</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.2\">2020</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.3\">VLEP</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.4\">Event Prediction</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.10.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Da et\u00a0al<span class=\"ltx_text\">.</span>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib39\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.10.2\">2021</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.10.3\">EMU</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.10.4\">Understanding edits</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.10.5\">\u2713</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.11.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Kayser et\u00a0al<span class=\"ltx_text\">.</span>, 
<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2103.11072v3#bib.bib87\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.11.2\">2021</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.11.3\">E-ViL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.11.4\">Vision-language Tasks</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.11.5\">\u2713</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 2. Summary of datasets with natural language explanations for multimodal tasks." |
| } |
| }, |
| "image_paths": { |
| "1": { |
| "figure_path": "2103.11072v3_figure_1.png", |
| "caption": "Figure 1. Sample visualizations of identified important features from the inputs detected by four different methods. (a): Rationale Extraction on sentiment analysis task; (b) Attention Weights on Visual Question Answering task: (c) Word importance from Attribution methods on machine translation task; (d) Input perturbation on sentiment analysis task and the expansion of counterfactual explanation.", |
| "url": "http://arxiv.org/html/2103.11072v3/extracted/5430162/feature_importance.png" |
| } |
| }, |
| "validation": true, |
| "references": [ |
| { |
| "1": { |
| "title": "Peeking inside the black-box: A survey on\nExplainable Artificial Intelligence (XAI).", |
| "author": "Amina Adadi and Mohammed\nBerrada. 2018.", |
| "venue": "IEEE Access 6\n(2018), 52138\u201352160.", |
| "url": null |
| } |
| }, |
| { |
| "2": { |
| "title": "Fine-grained analysis of sentence embeddings using\nauxiliary prediction tasks.", |
| "author": "Yossi Adi, Einat Kermany,\nYonatan Belinkov, Ofer Lavi, and\nYoav Goldberg. 2016.", |
| "venue": "arXiv preprint arXiv:1608.04207\n(2016).", |
| "url": null |
| } |
| }, |
| { |
| "3": { |
| "title": "Explanations for CommonsenseQA: New\nDataset and Models. In Proceedings of the 59th\nAnnual Meeting of the Association for Computational Linguistics and the 11th\nInternational Joint Conference on Natural Language Processing (Volume 1: Long\nPapers). Association for Computational Linguistics,\nOnline, 3050\u20133065.", |
| "author": "Shourya Aggarwal,\nDivyanshu Mandowara, Vishwajeet Agrawal,\nDinesh Khandelwal, Parag Singla, and\nDinesh Garg. 2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.acl-long.238", |
| "url": null |
| } |
| }, |
| { |
| "4": { |
| "title": "Where is Your Evidence: Improving Fact-checking by\nJustification Modeling. In Proceedings of the\nFirst Workshop on Fact Extraction and VERification (FEVER).\nAssociation for Computational Linguistics,\nBrussels, Belgium, 85\u201390.", |
| "author": "Tariq Alhindi, Savvas\nPetridis, and Smaranda Muresan.\n2018.", |
| "venue": "https://doi.org/10.18653/v1/W18-5513", |
| "url": null |
| } |
| }, |
| { |
| "5": { |
| "title": "A causal framework for explaining the predictions\nof black-box sequence-to-sequence models. In\nProceedings of the 2017 Conference on Empirical\nMethods in Natural Language Processing. 412\u2013421.", |
| "author": "David Alvarez-Melis and\nTommi Jaakkola. 2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "6": { |
| "title": "Spice: Semantic propositional image caption\nevaluation. In European Conference on Computer\nVision. Springer, 382\u2013398.", |
| "author": "Peter Anderson, Basura\nFernando, Mark Johnson, and Stephen\nGould. 2016.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "7": { |
| "title": "Bottom-up and top-down attention for image\ncaptioning and visual question answering. In\nProceedings of the IEEE conference on computer\nvision and pattern recognition. 6077\u20136086.", |
| "author": "Peter Anderson, Xiaodong\nHe, Chris Buehler, Damien Teney,\nMark Johnson, Stephen Gould, and\nLei Zhang. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "8": { |
| "title": "Vqa: Visual question answering. In\nProceedings of the IEEE international conference on\ncomputer vision. 2425\u20132433.", |
| "author": "Stanislaw Antol, Aishwarya\nAgrawal, Jiasen Lu, Margaret Mitchell,\nDhruv Batra, C Lawrence Zitnick, and\nDevi Parikh. 2015.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "9": { |
| "title": "\u201d What is relevant in a text document?\u201d: An\ninterpretable machine learning approach.", |
| "author": "Leila Arras, Franziska\nHorn, Gr\u00e9goire Montavon, Klaus-Robert\nM\u00fcller, and Wojciech Samek.\n2017.", |
| "venue": "PloS one 12,\n8 (2017), e0181142.", |
| "url": null |
| } |
| }, |
| { |
| "10": { |
| "title": "One explanation does not fit all: A toolkit and\ntaxonomy of ai explainability techniques.", |
| "author": "Vijay Arya, Rachel KE\nBellamy, Pin-Yu Chen, Amit Dhurandhar,\nMichael Hind, Samuel C Hoffman,\nStephanie Houde, Q Vera Liao,\nRonny Luss, Aleksandra Mojsilovi\u0107,\net al. 2019.", |
| "venue": "arXiv preprint arXiv:1909.03012\n(2019).", |
| "url": null |
| } |
| }, |
| { |
| "11": { |
| "title": "What Gets Echoed? Understanding the\n\u201cPointers\u201d in Explanations of Persuasive Arguments. In\nProceedings of the 2019 Conference on Empirical\nMethods in Natural Language Processing and the 9th International Joint\nConference on Natural Language Processing (EMNLP-IJCNLP).\nAssociation for Computational Linguistics,\nHong Kong, China, 2911\u20132921.", |
| "author": "David Atkinson,\nKumar Bhargav Srinivasan, and Chenhao\nTan. 2019.", |
| "venue": "https://doi.org/10.18653/v1/D19-1289", |
| "url": null |
| } |
| }, |
| { |
| "12": { |
| "title": "Generating Rationales in Visual Question\nAnswering.", |
| "author": "Hammad A Ayyubi, Md\nTanjim, Julian J McAuley, Garrison W\nCottrell, et al. 2020.", |
| "venue": "arXiv preprint arXiv:2004.02032\n(2020).", |
| "url": null |
| } |
| }, |
| { |
| "13": { |
| "title": "On pixel-wise explanations for non-linear\nclassifier decisions by layer-wise relevance propagation.", |
| "author": "Sebastian Bach, Alexander\nBinder, Gr\u00e9goire Montavon, Frederick\nKlauschen, Klaus-Robert M\u00fcller, and\nWojciech Samek. 2015.", |
| "venue": "PloS one 10,\n7 (2015).", |
| "url": null |
| } |
| }, |
| { |
| "14": { |
| "title": "Neural machine translation by jointly learning to\nalign and translate. In ICLR.", |
| "author": "Dzmitry Bahdanau,\nKyunghyun Cho, and Yoshua Bengio.\n2014.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "15": { |
| "title": "Why attentions may not be interpretable?. In\nProceedings of the 27th ACM SIGKDD Conference on\nKnowledge Discovery & Data Mining. 25\u201334.", |
| "author": "Bing Bai, Jian Liang,\nGuanhua Zhang, Hao Li,\nKun Bai, and Fei Wang.\n2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "16": { |
| "title": "How much should you ask? On the question structure\nin QA systems. In BlackboxNLP@EMNLP.", |
| "author": "Dominika Basaj, Barbara\nRychalska, Przemyslaw Biecek, and Anna\nWr\u00f3blewska. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "17": { |
| "title": "Interpretable Neural Predictions with\nDifferentiable Binary Variables. In Proceedings of\nthe 57th Annual Meeting of the Association for Computational Linguistics.\n2963\u20132977.", |
| "author": "Joost Bastings, Wilker\nAziz, and Ivan Titov. 2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "18": { |
| "title": "What do Neural Machine Translation Models Learn\nabout Morphology?. In Proceedings of the 55th\nAnnual Meeting of the Association for Computational Linguistics (Volume 1:\nLong Papers). Association for Computational\nLinguistics, Vancouver, Canada,\n861\u2013872.", |
| "author": "Yonatan Belinkov, Nadir\nDurrani, Fahim Dalvi, Hassan Sajjad,\nand James Glass. 2017a.", |
| "venue": "https://doi.org/10.18653/v1/P17-1080", |
| "url": null |
| } |
| }, |
| { |
| "19": { |
| "title": "Analysis methods in neural language processing: A\nsurvey.", |
| "author": "Yonatan Belinkov and\nJames Glass. 2019.", |
| "venue": "Transactions of the Association for\nComputational Linguistics 7 (2019),\n49\u201372.", |
| "url": null |
| } |
| }, |
| { |
| "20": { |
| "title": "Evaluating Layers of Representation in Neural\nMachine Translation on Part-of-Speech and Semantic Tagging Tasks. In\nProceedings of the Eighth International Joint\nConference on Natural Language Processing (Volume 1: Long Papers).\nAsian Federation of Natural Language Processing,\nTaipei, Taiwan, 1\u201310.", |
| "author": "Yonatan Belinkov,\nLlu\u00eds M\u00e0rquez, Hassan Sajjad,\nNadir Durrani, Fahim Dalvi, and\nJames Glass. 2017b.", |
| "venue": "https://www.aclweb.org/anthology/I17-1001", |
| "url": null |
| } |
| }, |
| { |
| "21": { |
| "title": "A large annotated corpus for learning natural\nlanguage inference. In EMNLP.", |
| "author": "Samuel R Bowman, Gabor\nAngeli, Christopher Potts, and\nChristopher D Manning. 2015.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "22": { |
| "title": "Learning to Rationalize for Nonmonotonic Reasoning\nwith Distant Supervision.", |
| "author": "Faeze Brahman, Vered\nShwartz, Rachel Rudinger, and Yejin\nChoi. 2021.", |
| "venue": "Proceedings of the AAAI Conference on\nArtificial Intelligence 35, 14\n(May 2021), 12592\u201312601.", |
| "url": null |
| } |
| }, |
| { |
| "23": { |
| "title": "e-SNLI: Natural Language Inference with Natural\nLanguage Explanations.", |
| "author": "Oana-Maria Camburu, Tim\nRockt\u00e4schel, Thomas Lukasiewicz, and\nPhil Blunsom. 2018.", |
| "venue": "In Advances in Neural Information\nProcessing Systems 31, S. Bengio,\nH. Wallach, H. Larochelle,\nK. Grauman, N. Cesa-Bianchi, and\nR. Garnett (Eds.). Curran Associates,\nInc., 9539\u20139549.", |
| "url": null |
| } |
| }, |
| { |
| "24": { |
| "title": "Make Up Your Mind! Adversarial Generation of\nInconsistent Natural Language Explanations. In\nProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics. Association\nfor Computational Linguistics, Online,\n4157\u20134165.", |
| "author": "Oana-Maria Camburu,\nBrendan Shillingford, Pasquale Minervini,\nThomas Lukasiewicz, and Phil Blunsom.\n2020.", |
| "venue": "https://www.aclweb.org/anthology/2020.acl-main.382", |
| "url": null |
| } |
| }, |
| { |
| "25": { |
| "title": "Scenegate: Scene-graph based co-attention networks\nfor text visual question answering.", |
| "author": "Feiqi Cao, Siwen Luo,\nFelipe Nunez, Zean Wen,\nJosiah Poon, and Soyeon Caren Han.\n2023.", |
| "venue": "Robotics 12,\n4 (2023), 114.", |
| "url": null |
| } |
| }, |
| { |
| "26": { |
| "title": "Evaluating and Characterizing Human Rationales. In\nProceedings of the 2020 Conference on Empirical\nMethods in Natural Language Processing (EMNLP).\nAssociation for Computational Linguistics,\nOnline, 9294\u20139307.", |
| "author": "Samuel Carton, Anirudh\nRathore, and Chenhao Tan.\n2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.emnlp-main.747", |
| "url": null |
| } |
| }, |
| { |
| "27": { |
| "title": "Interpretability of deep learning models: A survey\nof results. In 2017 IEEE SmartWorld, Ubiquitous\nIntelligence Computing, Advanced Trusted Computed, Scalable Computing\nCommunications, Cloud Big Data Computing, Internet of People and Smart City\nInnovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI).\nIEEE, San Francisco, CA, USA,\n1\u20136.", |
| "author": "S. Chakraborty, R.\nTomsett, R. Raghavendra, D.\nHarborne, M. Alzantot, F. Cerutti,\nM. Srivastava, A. Preece,\nS. Julier, R. M. Rao,\nT. D. Kelley, D. Braines,\nM. Sensoy, C. J. Willis, and\nP. Gurram. 2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "28": { |
| "title": "A Comparative Study of Faithfulness Metrics for\nModel Interpretability Methods. In Proceedings of\nthe 60th Annual Meeting of the Association for Computational Linguistics\n(Volume 1: Long Papers). 5029\u20135038.", |
| "author": "Chun Sik Chan, Huanqi\nKong, and Liang Guanqing.\n2022.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "29": { |
| "title": "A Game Theoretic Approach to Class-wise Selective\nRationalization. In Advances in Neural Information\nProcessing Systems. 10055\u201310065.", |
| "author": "Shiyu Chang, Yang Zhang,\nMo Yu, and Tommi Jaakkola.\n2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "30": { |
| "title": "Learning to explain: An information-theoretic\nperspective on model interpretation. In\nInternational Conference on Machine Learning.\nPMLR, 883\u2013892.", |
| "author": "Jianbo Chen, Le Song,\nMartin Wainwright, and Michael Jordan.\n2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "31": { |
| "title": "KACE: Generating Knowledge Aware Contrastive\nExplanations for Natural Language Inference. In\nProceedings of the 59th Annual Meeting of the\nAssociation for Computational Linguistics and the 11th International Joint\nConference on Natural Language Processing (Volume 1: Long Papers).\n2516\u20132527.", |
| "author": "Qianglong Chen, Feng Ji,\nXiangji Zeng, Feng-Lin Li,\nJi Zhang, Haiqing Chen, and\nYin Zhang. 2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "32": { |
| "title": "Uniter: Universal image-text representation\nlearning. In ECCV.", |
| "author": "Yen-Chun Chen, Linjie Li,\nLicheng Yu, Ahmed El Kholy,\nFaisal Ahmed, Zhe Gan,\nYu Cheng, and Jingjing Liu.\n2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "33": { |
| "title": "Improving the Faithfulness of Attention-based\nExplanations with Task-specific Information for Text Classification. In\nProceedings of the 59th Annual Meeting of the\nAssociation for Computational Linguistics and the 11th International Joint\nConference on Natural Language Processing (Volume 1: Long Papers).\n477\u2013488.", |
| "author": "George Chrysostomou and\nNikolaos Aletras. 2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "34": { |
| "title": "What Does BERT Look at? An Analysis of BERT\u2019s\nAttention. In Proceedings of the 2019 ACL Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nFlorence, Italy, 276\u2013286.", |
| "author": "Kevin Clark, Urvashi\nKhandelwal, Omer Levy, and\nChristopher D. Manning. 2019a.", |
| "venue": "https://doi.org/10.18653/v1/W19-4828", |
| "url": null |
| } |
| }, |
| { |
| "35": { |
| "title": "What Does BERT Look at? An Analysis of BERT\u2019s\nAttention. In Proceedings of the 2019 ACL Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\n276\u2013286.", |
| "author": "Kevin Clark, Urvashi\nKhandelwal, Omer Levy, and\nChristopher D Manning. 2019b.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "36": { |
| "title": "A Study of Automatic Metrics for the Evaluation of\nNatural Language Explanations. In Proceedings of\nthe 16th Conference of the European Chapter of the Association for\nComputational Linguistics: Main Volume. Association for\nComputational Linguistics, Online,\n2376\u20132387.", |
| "author": "Miruna-Adriana Clinciu,\nArash Eshghi, and Helen Hastie.\n2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.eacl-main.202", |
| "url": null |
| } |
| }, |
| { |
| "37": { |
| "title": "What you can cram into a single $&!#*\nvector: Probing sentence embeddings for linguistic properties. In\nProceedings of the 56th Annual Meeting of the\nAssociation for Computational Linguistics (Volume 1: Long Papers).\nAssociation for Computational Linguistics,\nMelbourne, Australia, 2126\u20132136.", |
| "author": "Alexis Conneau, German\nKruszewski, Guillaume Lample, Lo\u00efc\nBarrault, and Marco Baroni.\n2018.", |
| "venue": "https://doi.org/10.18653/v1/P18-1198", |
| "url": null |
| } |
| }, |
| { |
| "38": { |
| "title": "Edited Media Understanding Frames: Reasoning About\nthe Intent and Implications of Visual Misinformation. In\nProceedings of the 59th Annual Meeting of the\nAssociation for Computational Linguistics and the 11th International Joint\nConference on Natural Language Processing (Volume 1: Long Papers).\nAssociation for Computational Linguistics,\nOnline, 2026\u20132039.", |
| "author": "Jeff Da, Maxwell Forbes,\nRowan Zellers, Anthony Zheng,\nJena D. Hwang, Antoine Bosselut, and\nYejin Choi. 2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.acl-long.158", |
| "url": null |
| } |
| }, |
| { |
| "39": { |
| "title": "What is one grain of sand in the desert? analyzing\nindividual neurons in deep nlp models. In\nProceedings of the AAAI Conference on Artificial\nIntelligence, Vol. 33. 6309\u20136317.", |
| "author": "Fahim Dalvi, Nadir\nDurrani, Hassan Sajjad, Yonatan\nBelinkov, Anthony Bau, and James\nGlass. 2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "40": { |
| "title": "Meteor Universal: Language Specific Translation\nEvaluation for Any Target Language. In Proceedings\nof the EACL 2014 Workshop on Statistical Machine Translation.", |
| "author": "Michael Denkowski and\nAlon Lavie. 2014.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "41": { |
| "title": "BERT: Pre-training of Deep Bidirectional\nTransformers for Language Understanding. In\nProceedings of the 2019 Conference of the North\nAmerican Chapter of the Association for Computational Linguistics: Human\nLanguage Technologies, Volume 1 (Long and Short Papers).\nAssociation for Computational Linguistics,\nMinneapolis, Minnesota, 4171\u20134186.", |
| "author": "Jacob Devlin, Ming-Wei\nChang, Kenton Lee, and Kristina\nToutanova. 2019a.", |
| "venue": "https://doi.org/10.18653/v1/N19-1423", |
| "url": null |
| } |
| }, |
| { |
| "42": { |
| "title": "BERT: Pre-training of Deep Bidirectional\nTransformers for Language Understanding. In\nProceedings of the 2019 Conference of the North\nAmerican Chapter of the Association for Computational Linguistics: Human\nLanguage Technologies, Volume 1 (Long and Short Papers).\nAssociation for Computational Linguistics,\nMinneapolis, Minnesota, 4171\u20134186.", |
| "author": "Jacob Devlin, Ming-Wei\nChang, Kenton Lee, and Kristina\nToutanova. 2019b.", |
| "venue": "https://doi.org/10.18653/v1/N19-1423", |
| "url": null |
| } |
| }, |
| { |
| "43": { |
| "title": "ERASER: A Benchmark to Evaluate Rationalized NLP\nModels. In Proceedings of the 58th Annual Meeting\nof the Association for Computational Linguistics.\n4443\u20134458.", |
| "author": "Jay DeYoung, Sarthak\nJain, Nazneen Fatema Rajani, Eric\nLehman, Caiming Xiong, Richard Socher,\nand Byron C Wallace. 2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "44": { |
| "title": "Visualizing and understanding neural machine\ntranslation. In Proceedings of the 55th Annual\nMeeting of the Association for Computational Linguistics (Volume 1: Long\nPapers). 1150\u20131159.", |
| "author": "Yanzhuo Ding, Yang Liu,\nHuanbo Luan, and Maosong Sun.\n2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "45": { |
| "title": "VQA: A New Dataset for Real-World VQA on PDF\nDocuments.", |
| "author": "Yihao Ding, Siwen Luo,\nHyunsuk Chung, and Soyeon Caren Han.\n2023.", |
| "venue": "arXiv preprint arXiv:2304.06447\n(2023).", |
| "url": null |
| } |
| }, |
| { |
| "46": { |
| "title": "Considerations for Evaluation and\nGeneralization in Interpretable Machine Learning.", |
| "author": "Finale Doshi-Velez and\nBeen Kim. 2018.", |
| "venue": "Springer International Publishing,\nCham, 3\u201317.", |
| "url": null |
| } |
| }, |
| { |
| "47": { |
| "title": "Learning Credible Deep Neural Networks with\nRationale Regularization.", |
| "author": "Mengnan Du, Ninghao Liu,\nFan Yang, and Xia Hu.\n2019a.", |
| "venue": "2019 IEEE International Conference on Data\nMining (ICDM) (2019), 150\u2013159.", |
| "url": null |
| } |
| }, |
| { |
| "48": { |
| "title": "On attribution of recurrent neural network\npredictions via additive decomposition. In The\nWorld Wide Web Conference. 383\u2013393.", |
| "author": "Mengnan Du, Ninghao Liu,\nFan Yang, Shuiwang Ji, and\nXia Hu. 2019b.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "49": { |
| "title": "HotFlip: White-Box Adversarial Examples for Text\nClassification. In Proceedings of the 56th Annual\nMeeting of the Association for Computational Linguistics (Volume 2: Short\nPapers). 31\u201336.", |
| "author": "Javid Ebrahimi, Anyi Rao,\nDaniel Lowd, and Dejing Dou.\n2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "50": { |
| "title": "Rationalization: A neural machine translation\napproach to generating natural language explanations. In\nProceedings of the 2018 AAAI/ACM Conference on AI,\nEthics, and Society. 81\u201387.", |
| "author": "Upol Ehsan, Brent\nHarrison, Larry Chan, and Mark O\nRiedl. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "51": { |
| "title": "Automated Rationale Generation: A Technique for\nExplainable AI and Its Effects on Human Perceptions. In\nProceedings of the 24th International Conference on\nIntelligent User Interfaces (Marina del Ray, California)\n(IUI \u201919). Association for\nComputing Machinery, New York, NY, USA,\n263\u2013274.", |
| "author": "Upol Ehsan, Pradyumna\nTambwekar, Larry Chan, Brent Harrison,\nand Mark O. Riedl. 2019.", |
| "venue": "https://doi.org/10.1145/3301275.3302316", |
| "url": null |
| } |
| }, |
| { |
| "52": { |
| "title": "Amnesic Probing: Behavioral Explanation with\nAmnesic Counterfactuals.", |
| "author": "Yanai Elazar, Shauli\nRavfogel, Alon Jacovi, and Yoav\nGoldberg. 2021.", |
| "venue": "Transactions of the Association for\nComputational Linguistics 9 (03\n2021), 160\u2013175.", |
| "url": null |
| } |
| }, |
| { |
| "53": { |
| "title": "Cross-Domain Transfer of Generative Explanations\nUsing Text-to-Text Models. In Natural Language\nProcessing and Information Systems,\nElisabeth M\u00e9tais,\nFarid Meziane, Helmut Horacek, and\nEpaminondas Kapetanios (Eds.).\nSpringer International Publishing,\nCham, 76\u201389.", |
| "author": "Karl Fredrik Erliksson,\nAnders Arpteg, Mihhail Matskin, and\nAmir H. Payberah. 2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "54": { |
| "title": "Probing for semantic evidence of composition by\nmeans of simple classification tasks. In\nProceedings of the 1st Workshop on Evaluating\nVector-Space Representations for NLP. Association for\nComputational Linguistics, Berlin, Germany,\n134\u2013139.", |
| "author": "Allyson Ettinger, Ahmed\nElgohary, and Philip Resnik.\n2016.", |
| "venue": "https://doi.org/10.18653/v1/W16-2524", |
| "url": null |
| } |
| }, |
| { |
| "55": { |
| "title": "Causal Inference in Natural Language Processing:\nEstimation, Prediction, Interpretation and Beyond.", |
| "author": "Amir Feder, Katherine A\nKeith, Emaad Manzoor, Reid Pryzant,\nDhanya Sridhar, Zach Wood-Doughty,\nJacob Eisenstein, Justin Grimmer,\nRoi Reichart, Margaret E Roberts,\net al. 2022.", |
| "venue": "Transactions of the Association for\nComputational Linguistics 10 (2022),\n1138\u20131158.", |
| "url": null |
| } |
| }, |
| { |
| "56": { |
| "title": "Pathologies of Neural Models Make Interpretations\nDifficult. In Proceedings of the 2018 Conference\non Empirical Methods in Natural Language Processing.\n3719\u20133728.", |
| "author": "Shi Feng, Eric Wallace,\nAlvin Grissom II, Mohit Iyyer,\nPedro Rodriguez, and Jordan\nBoyd-Graber. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "57": { |
| "title": "Under the Hood: Using Diagnostic Classifiers to\nInvestigate and Improve how Language Models Track Agreement Information. In\nProceedings of the 2018 EMNLP Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nBrussels, Belgium, 240\u2013248.", |
| "author": "Mario Giulianelli, Jack\nHarding, Florian Mohnert, Dieuwke\nHupkes, and Willem Zuidema.\n2018.", |
| "venue": "https://doi.org/10.18653/v1/W18-5426", |
| "url": null |
| } |
| }, |
| { |
| "58": { |
| "title": "Generative adversarial nets. In\nAdvances in neural information processing\nsystems. 2672\u20132680.", |
| "author": "Ian Goodfellow, Jean\nPouget-Abadie, Mehdi Mirza, Bing Xu,\nDavid Warde-Farley, Sherjil Ozair,\nAaron Courville, and Yoshua Bengio.\n2014.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "59": { |
| "title": "Making the V in VQA matter: Elevating the role of\nimage understanding in Visual Question Answering. In\nProceedings of the IEEE Conference on Computer\nVision and Pattern Recognition. 6904\u20136913.", |
| "author": "Yash Goyal, Tejas Khot,\nDouglas Summers-Stay, Dhruv Batra, and\nDevi Parikh. 2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "60": { |
| "title": "A survey of methods for explaining black box\nmodels.", |
| "author": "Riccardo Guidotti, Anna\nMonreale, Salvatore Ruggieri, Franco\nTurini, Fosca Giannotti, and Dino\nPedreschi. 2018.", |
| "venue": "Comput. Surveys 51,\n5 (2018).", |
| "url": null |
| } |
| }, |
| { |
| "61": { |
| "title": "Distributional vectors encode referential\nattributes. In Proceedings of the 2015 Conference\non Empirical Methods in Natural Language Processing.\nAssociation for Computational Linguistics,\nLisbon, Portugal, 12\u201321.", |
| "author": "Abhijeet Gupta, Gemma\nBoleda, Marco Baroni, and Sebastian\nPad\u00f3. 2015.", |
| "venue": "https://doi.org/10.18653/v1/D15-1002", |
| "url": null |
| } |
| }, |
| { |
| "62": { |
| "title": "A Tale of a Probe and a Parser. In\nProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics. Association\nfor Computational Linguistics, Online,\n7389\u20137395.", |
| "author": "Rowan Hall Maudslay, Josef\nValvoda, Tiago Pimentel, Adina Williams,\nand Ryan Cotterell. 2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.acl-main.659", |
| "url": null |
| } |
| }, |
| { |
| "63": { |
| "title": "VICTR: Visual Information Captured Text\nRepresentation for Text-to-Vision Multimodal Tasks. In\nProceedings of the 28th International Conference on\nComputational Linguistics, Donia Scott,\nNuria Bel, and Chengqing Zong (Eds.).\nInternational Committee on Computational Linguistics,\nBarcelona, Spain (Online), 3107\u20133117.", |
| "author": "Caren Han, Siqu Long,\nSiwen Luo, Kunze Wang, and\nJosiah Poon. 2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.coling-main.277", |
| "url": null |
| } |
| }, |
| { |
| "64": { |
| "title": "Training Classifiers with Natural Language\nExplanations. In Proceedings of the 56th Annual\nMeeting of the Association for Computational Linguistics (Volume 1: Long\nPapers). Association for Computational Linguistics,\nMelbourne, Australia, 1884\u20131895.", |
| "author": "Braden Hancock, Paroma\nVarma, Stephanie Wang, Martin Bringmann,\nPercy Liang, and Christopher R\u00e9.\n2018.", |
| "venue": "https://doi.org/10.18653/v1/P18-1175", |
| "url": null |
| } |
| }, |
| { |
| "65": { |
| "title": "When Can Models Learn From Explanations? A Formal\nFramework for Understanding the Roles of Explanation Data. In\nProceedings of the First Workshop on Learning with\nNatural Language Supervision. Association for\nComputational Linguistics, Dublin, Ireland,\n29\u201339.", |
| "author": "Peter Hase and Mohit\nBansal. 2022.", |
| "venue": "https://doi.org/10.18653/v1/2022.lnls-1.4", |
| "url": null |
| } |
| }, |
| { |
| "66": { |
| "title": "Leakage-Adjusted Simulatability: Can Models\nGenerate Non-Trivial Explanations of Their Behavior in Natural Language?. In\nFindings of the Association for Computational\nLinguistics: EMNLP 2020. Association for Computational\nLinguistics, Online, 4351\u20134367.", |
| "author": "Peter Hase, Shiyue Zhang,\nHarry Xie, and Mohit Bansal.\n2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.findings-emnlp.390", |
| "url": null |
| } |
| }, |
| { |
| "67": { |
| "title": "Towards Understanding Neural Machine Translation\nwith Word Importance. In Proceedings of the 2019\nConference on Empirical Methods in Natural Language Processing and the 9th\nInternational Joint Conference on Natural Language Processing\n(EMNLP-IJCNLP). 952\u2013961.", |
| "author": "Shilin He, Zhaopeng Tu,\nXing Wang, Longyue Wang,\nMichael Lyu, and Shuming Shi.\n2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "68": { |
| "title": "Generating visual explanations. In\nEuropean Conference on Computer Vision. Springer,\n3\u201319.", |
| "author": "Lisa Anne Hendricks,\nZeynep Akata, Marcus Rohrbach,\nJeff Donahue, Bernt Schiele, and\nTrevor Darrell. 2016.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "69": { |
| "title": "Generating Counterfactual Explanations with Natural\nLanguage. In ICML Workshop on Human\nInterpretability in Machine Learning. 95\u201398.", |
| "author": "Lisa Anne Hendricks,\nRonghang Hu, Trevor Darrell, and\nZeynep Akata. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "70": { |
| "title": "Causal shapley values: Exploiting causal knowledge\nto explain individual predictions of complex models.", |
| "author": "Tom Heskes, Evi Sijben,\nIoan Gabriel Bucur, and Tom Claassen.\n2020.", |
| "venue": "Advances in neural information processing\nsystems 33 (2020),\n4778\u20134789.", |
| "url": null |
| } |
| }, |
| { |
| "71": { |
| "title": "Designing and Interpreting Probes with Control\nTasks. In Proceedings of the 2019 Conference on\nEmpirical Methods in Natural Language Processing and the 9th International\nJoint Conference on Natural Language Processing (EMNLP-IJCNLP).\nAssociation for Computational Linguistics,\nHong Kong, China, 2733\u20132743.", |
| "author": "John Hewitt and Percy\nLiang. 2019.", |
| "venue": "https://doi.org/10.18653/v1/D19-1275", |
| "url": null |
| } |
| }, |
| { |
| "72": { |
| "title": "A Structural Probe for Finding Syntax in Word\nRepresentations. In Proceedings of the 2019\nConference of the North American Chapter of the Association for\nComputational Linguistics: Human Language Technologies, Volume 1 (Long and\nShort Papers). Association for Computational\nLinguistics, Minneapolis, Minnesota,\n4129\u20134138.", |
| "author": "John Hewitt and\nChristopher D. Manning. 2019.", |
| "venue": "https://doi.org/10.18653/v1/N19-1419", |
| "url": null |
| } |
| }, |
| { |
| "73": { |
| "title": "Long short-term memory.", |
| "author": "Sepp Hochreiter and\nJ\u00fcrgen Schmidhuber. 1997.", |
| "venue": "Neural computation 9,\n8 (1997), 1735\u20131780.", |
| "url": null |
| } |
| }, |
| { |
| "74": { |
| "title": "Multimodal explanations: Justifying decisions and\npointing to the evidence. In Proceedings of the\nIEEE Conference on Computer Vision and Pattern Recognition.\n8779\u20138788.", |
| "author": "Dong Huk Park, Lisa\nAnne Hendricks, Zeynep Akata, Anna\nRohrbach, Bernt Schiele, Trevor Darrell,\nand Marcus Rohrbach. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "75": { |
| "title": "Visualisation and\u2019diagnostic classifiers\u2019 reveal\nhow recurrent and recursive neural networks process hierarchical structure.", |
| "author": "Dieuwke Hupkes, Sara\nVeldhoen, and Willem Zuidema.\n2018.", |
| "venue": "Journal of Artificial Intelligence Research\n61 (2018), 907\u2013926.", |
| "url": null |
| } |
| }, |
| { |
| "76": { |
| "title": "Summarize-then-Answer: Generating Concise\nExplanations for Multi-hop Reading Comprehension. In\nProceedings of the 2021 Conference on Empirical\nMethods in Natural Language Processing. Association for\nComputational Linguistics, Online and Punta Cana,\nDominican Republic, 6064\u20136080.", |
| "author": "Naoya Inoue, Harsh\nTrivedi, Steven Sinha, Niranjan\nBalasubramanian, and Kentaro Inui.\n2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.emnlp-main.490", |
| "url": null |
| } |
| }, |
| { |
| "77": { |
| "title": "Towards Faithfully Interpretable NLP Systems: How\nShould We Define and Evaluate Faithfulness?. In\nProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics. Association\nfor Computational Linguistics, Online,\n4198\u20134205.", |
| "author": "Alon Jacovi and Yoav\nGoldberg. 2020a.", |
| "venue": "https://www.aclweb.org/anthology/2020.acl-main.386", |
| "url": null |
| } |
| }, |
| { |
| "78": { |
| "title": "Towards Faithfully Interpretable NLP Systems: How\nShould We Define and Evaluate Faithfulness?. In\nProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics. 4198\u20134205.", |
| "author": "Alon Jacovi and Yoav\nGoldberg. 2020b.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "79": { |
| "title": "Formalizing Trust in Artificial Intelligence:\nPrerequisites, Causes and Goals of Human Trust in AI. In\nProceedings of the 2021 ACM Conference on Fairness,\nAccountability, and Transparency (Virtual Event, Canada)\n(FAccT \u201921). Association for\nComputing Machinery, New York, NY, USA,\n624\u2013635.", |
| "author": "Alon Jacovi, Ana\nMarasovi\u0107, Tim Miller, and Yoav\nGoldberg. 2021.", |
| "venue": "https://doi.org/10.1145/3442188.3445923", |
| "url": null |
| } |
| }, |
| { |
| "80": { |
| "title": "Attention is not Explanation. In\nProceedings of the 2019 Conference of the North\nAmerican Chapter of the Association for Computational Linguistics: Human\nLanguage Technologies, Volume 1 (Long and Short Papers).\nAssociation for Computational Linguistics,\nMinneapolis, Minnesota, 3543\u20133556.", |
| "author": "Sarthak Jain and\nByron C. Wallace. 2019a.", |
| "venue": "https://doi.org/10.18653/v1/N19-1357", |
| "url": null |
| } |
| }, |
| { |
| "81": { |
| "title": "Attention is not Explanation. In\nProceedings of the 2019 Conference of the North\nAmerican Chapter of the Association for Computational Linguistics: Human\nLanguage Technologies, Volume 1 (Long and Short Papers).\n3543\u20133556.", |
| "author": "Sarthak Jain and Byron C\nWallace. 2019b.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "82": { |
| "title": "Are Training Resources Insufficient? Predict First\nThen Explain!", |
| "author": "Myeongjun Jang and\nThomas Lukasiewicz. 2021.", |
| "venue": "CoRR abs/2110.02056\n(2021).", |
| "url": null |
| } |
| }, |
| { |
| "83": { |
| "title": "What\u2019s in an Explanation? Characterizing\nKnowledge and Inference Requirements for Elementary Science Exams. In\nProceedings of COLING 2016, the 26th\nInternational Conference on Computational Linguistics: Technical Papers.\nThe COLING 2016 Organizing Committee,\nOsaka, Japan, 2956\u20132965.", |
| "author": "Peter Jansen, Niranjan\nBalasubramanian, Mihai Surdeanu, and\nPeter Clark. 2016.", |
| "venue": "https://aclanthology.org/C16-1278", |
| "url": null |
| } |
| }, |
| { |
| "84": { |
| "title": "Do Language Models Understand Anything? On the\nAbility of LSTMs to Understand Negative Polarity Items. In\nProceedings of the 2018 EMNLP Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nBrussels, Belgium, 222\u2013231.", |
| "author": "Jaap Jumelet and Dieuwke\nHupkes. 2018.", |
| "venue": "https://doi.org/10.18653/v1/W18-5424", |
| "url": null |
| } |
| }, |
| { |
| "85": { |
| "title": "Interpreting interpretability: understanding data\nscientists\u2019 use of interpretability tools for machine learning. In\nProceedings of the 2020 CHI conference on human\nfactors in computing systems. 1\u201314.", |
| "author": "Harmanpreet Kaur, Harsha\nNori, Samuel Jenkins, Rich Caruana,\nHanna Wallach, and Jennifer\nWortman Vaughan. 2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "86": { |
| "title": "E-ViL: A Dataset and Benchmark for Natural Language\nExplanations in Vision-Language Tasks. In\nProceedings of the IEEE/CVF International\nConference on Computer Vision (ICCV). 1244\u20131254.", |
| "author": "Maxime Kayser, Oana-Maria\nCamburu, Leonard Salewski, Cornelius\nEmde, Virginie Do, Zeynep Akata, and\nThomas Lukasiewicz. 2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "87": { |
| "title": "Textual explanations for self-driving vehicles. In\nProceedings of the European conference on computer\nvision (ECCV). 563\u2013578.", |
| "author": "Jinkyu Kim, Anna\nRohrbach, Trevor Darrell, John Canny,\nand Zeynep Akata. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "88": { |
| "title": "Spying on Your Neighbors: Fine-grained Probing of\nContextual Embeddings for Information about Surrounding Words. In\nProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics. Association\nfor Computational Linguistics, Online,\n4801\u20134811.", |
| "author": "Josef Klafka and Allyson\nEttinger. 2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.acl-main.434", |
| "url": null |
| } |
| }, |
| { |
| "89": { |
| "title": "What\u2019s in an Embedding? Analyzing Word Embeddings\nthrough Multilingual Evaluation. In Proceedings of\nthe 2015 Conference on Empirical Methods in Natural Language Processing.\nAssociation for Computational Linguistics,\nLisbon, Portugal, 2067\u20132073.", |
| "author": "Arne K\u00f6hn.\n2015.", |
| "venue": "https://doi.org/10.18653/v1/D15-1246", |
| "url": null |
| } |
| }, |
| { |
| "90": { |
| "title": "Explainable Automated Fact-Checking for Public\nHealth Claims. In Proceedings of the 2020\nConference on Empirical Methods in Natural Language Processing (EMNLP).\nAssociation for Computational Linguistics,\nOnline, 7740\u20137754.", |
| "author": "Neema Kotonya and\nFrancesca Toni. 2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.emnlp-main.623", |
| "url": null |
| } |
| }, |
| { |
| "91": { |
| "title": "NILE : Natural Language Inference with Faithful\nNatural Language Explanations. In Proceedings of\nthe 58th Annual Meeting of the Association for Computational Linguistics.\nAssociation for Computational Linguistics,\nOnline, 8730\u20138742.", |
| "author": "Sawan Kumar and Partha\nTalukdar. 2020.", |
| "venue": "https://www.aclweb.org/anthology/2020.acl-main.771", |
| "url": null |
| } |
| }, |
| { |
| "92": { |
| "title": "A generalized probability density function for\ndouble-bounded random processes.", |
| "author": "Ponnambalam Kumaraswamy.\n1980.", |
| "venue": "Journal of hydrology 46,\n1-2 (1980), 79\u201388.", |
| "url": null |
| } |
| }, |
| { |
| "93": { |
| "title": "What is More Likely to Happen Next?\nVideo-and-Language Future Event Prediction. In\nProceedings of the 2020 Conference on Empirical\nMethods in Natural Language Processing (EMNLP).\nAssociation for Computational Linguistics,\nOnline, 8769\u20138784.", |
| "author": "Jie Lei, Licheng Yu,\nTamara Berg, and Mohit Bansal.\n2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.emnlp-main.706", |
| "url": null |
| } |
| }, |
| { |
| "94": { |
| "title": "Rationalizing Neural Predictions. In\nProceedings of the 2016 Conference on Empirical\nMethods in Natural Language Processing. 107\u2013117.", |
| "author": "Tao Lei, Regina Barzilay,\nand Tommi Jaakkola. 2016.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "95": { |
| "title": "Personalized Transformer for Explainable\nRecommendation. In Proceedings of the 59th Annual\nMeeting of the Association for Computational Linguistics and the 11th\nInternational Joint Conference on Natural Language Processing (Volume 1: Long\nPapers). Association for Computational Linguistics,\nOnline, 4947\u20134957.", |
| "author": "Lei Li, Yongfeng Zhang,\nand Li Chen. 2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.acl-long.383", |
| "url": null |
| } |
| }, |
| { |
| "96": { |
| "title": "VQA-E: Explaining, Elaborating, and Enhancing Your\nAnswers for Visual Questions.", |
| "author": "Qing Li, Qingyi Tao,\nShafiq Joty, Jianfei Cai, and\nJiebo Luo. 2018.", |
| "venue": "ECCV (2018).", |
| "url": null |
| } |
| }, |
| { |
| "97": { |
| "title": "Using Interactive Feedback to Improve the Accuracy\nand Explainability of Question Answering Systems Post-Deployment. In\nFindings of the Association for Computational\nLinguistics: ACL 2022. Association for Computational\nLinguistics, Dublin, Ireland, 926\u2013937.", |
| "author": "Zichao Li, Prakhar\nSharma, Xing Han Lu, Jackie Cheung,\nand Siva Reddy. 2022.", |
| "venue": "https://doi.org/10.18653/v1/2022.findings-acl.75", |
| "url": null |
| } |
| }, |
| { |
| "98": { |
| "title": "ROUGE: A Package for Automatic Evaluation of\nSummaries. In Text Summarization Branches Out.\nAssociation for Computational Linguistics,\nBarcelona, Spain, 74\u201381.", |
| "author": "Chin-Yew Lin.\n2004.", |
| "venue": "https://www.aclweb.org/anthology/W04-1013", |
| "url": null |
| } |
| }, |
| { |
| "99": { |
| "title": "Microsoft coco: Common objects in context. In\nEuropean conference on computer vision. Springer,\n740\u2013755.", |
| "author": "Tsung-Yi Lin, Michael\nMaire, Serge Belongie, James Hays,\nPietro Perona, Deva Ramanan,\nPiotr Doll\u00e1r, and C Lawrence\nZitnick. 2014.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "100": { |
| "title": "Open Sesame: Getting inside BERT\u2019s Linguistic\nKnowledge. In Proceedings of the 2019 ACL Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nFlorence, Italy, 241\u2013253.", |
| "author": "Yongjie Lin, Yi Chern\nTan, and Robert Frank. 2019.", |
| "venue": "https://doi.org/10.18653/v1/W19-4825", |
| "url": null |
| } |
| }, |
| { |
| "101": { |
| "title": "Program Induction by Rationale Generation: Learning\nto Solve and Explain Algebraic Word Problems. In\nProceedings of the 55th Annual Meeting of the\nAssociation for Computational Linguistics (Volume 1: Long Papers).\n158\u2013167.", |
| "author": "Wang Ling, Dani Yogatama,\nChris Dyer, and Phil Blunsom.\n2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "102": { |
| "title": "The mythos of model interpretability.", |
| "author": "Zachary C. Lipton.\n2018.", |
| "venue": "Commun. ACM 61,\n10 (2018), 35\u201343.", |
| "url": null |
| } |
| }, |
| { |
| "103": { |
| "title": "Towards Explainable NLP: A Generative Explanation\nFramework for Text Classification. In Proceedings\nof the 57th Annual Meeting of the Association for Computational\nLinguistics. Association for Computational\nLinguistics, Florence, Italy,\n5570\u20135581.", |
| "author": "Hui Liu, Qingyu Yin,\nand William Yang Wang. 2019b.", |
| "venue": "https://doi.org/10.18653/v1/P19-1560", |
| "url": null |
| } |
| }, |
| { |
| "104": { |
| "title": "Linguistic Knowledge and Transferability of\nContextual Representations. In Proceedings of the\n2019 Conference of the North American Chapter of the Association for\nComputational Linguistics: Human Language Technologies, Volume 1 (Long and\nShort Papers). Association for Computational\nLinguistics, Minneapolis, Minnesota,\n1073\u20131094.", |
| "author": "Nelson F. Liu, Matt\nGardner, Yonatan Belinkov, Matthew E.\nPeters, and Noah A. Smith.\n2019a.", |
| "venue": "https://doi.org/10.18653/v1/N19-1112", |
| "url": null |
| } |
| }, |
| { |
| "105": { |
| "title": "Learning Sparse Neural Networks through L_0\nRegularization. In International Conference on\nLearning Representations.", |
| "author": "Christos Louizos, Max\nWelling, and Diederik P Kingma.\n2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "106": { |
| "title": "Predicting Inductive Biases of Pre-Trained Models.\nIn International Conference on Learning\nRepresentations.", |
| "author": "Charles Lovering, Rohan\nJha, Tal Linzen, and Ellie Pavlick.\n2021.", |
| "venue": "https://openreview.net/forum?id=mNtmhaDkAr", |
| "url": null |
| } |
| }, |
| { |
| "107": { |
| "title": "Hierarchical question-image co-attention for visual\nquestion answering. In Advances in neural\ninformation processing systems. 289\u2013297.", |
| "author": "Jiasen Lu, Jianwei Yang,\nDhruv Batra, and Devi Parikh.\n2016.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "108": { |
| "title": "A unified approach to interpreting model\npredictions. In Advances in neural information\nprocessing systems. 4765\u20134774.", |
| "author": "Scott M Lundberg and\nSu-In Lee. 2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "109": { |
| "title": "Beyond Polarity: Interpretable Financial Sentiment\nAnalysis with Hierarchical Query-driven Attention.. In\nIJCAI. 4244\u20134250.", |
| "author": "Ling Luo, Xiang Ao,\nFeiyang Pan, Jin Wang,\nTong Zhao, Ningzi Yu, and\nQing He. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "110": { |
| "title": "REXUP: I REason, I EXtract, I UPdate with\nStructured Compositional Reasoning for Visual Question Answering. In\nInternational Conference on Neural Information\nProcessing. Springer, 520\u2013532.", |
| "author": "Siwen Luo, Soyeon Caren\nHan, Kaiyuan Sun, and Josiah Poon.\n2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "111": { |
| "title": "Effective Approaches to Attention-based Neural\nMachine Translation. In Proceedings of the 2015\nConference on Empirical Methods in Natural Language Processing.\n1412\u20131421.", |
| "author": "Minh-Thang Luong, Hieu\nPham, and Christopher D Manning.\n2015.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "112": { |
| "title": "Learning word vectors for sentiment analysis. In\nProceedings of the 49th annual meeting of the\nassociation for computational linguistics: Human language technologies.\n142\u2013150.", |
| "author": "Andrew Maas, Raymond E\nDaly, Peter T Pham, Dan Huang,\nAndrew Y Ng, and Christopher Potts.\n2011.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "113": { |
| "title": "Towards a Grounded Dialog Model for Explainable\nArtificial Intelligence.", |
| "author": "Prashan Madumal, Tim\nMiller, Frank Vetere, and Liz\nSonenberg. 2018.", |
| "venue": "Workshop on Socio-cognitive Systems IJCAI\nabs/1806.08055 (2018).", |
| "url": null |
| } |
| }, |
| { |
| "114": { |
| "title": "Aspect-Based Sentiment Classification with\nAttentive Neural Turing Machines.. In IJCAI.\n5139\u20135145.", |
| "author": "Qianren Mao, Jianxin Li,\nSenzhang Wang, Yuanning Zhang,\nHao Peng, Min He, and\nLihong Wang. 2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "115": { |
| "title": "Few-Shot Self-Rationalization with Natural Language\nPrompts. In Findings of the Association for\nComputational Linguistics: NAACL 2022. Association for\nComputational Linguistics, Seattle, United States,\n410\u2013424.", |
| "author": "Ana Marasovic, Iz\nBeltagy, Doug Downey, and Matthew\nPeters. 2022.", |
| "venue": "https://doi.org/10.18653/v1/2022.findings-naacl.31", |
| "url": null |
| } |
| }, |
| { |
| "116": { |
| "title": "Natural Language Rationales with Full-Stack Visual\nReasoning: From Pixels to Semantic Frames to Commonsense Graphs. In\nFindings of the Association for Computational\nLinguistics: EMNLP 2020. Association for Computational\nLinguistics, Online, 2810\u20132829.", |
| "author": "Ana Marasovi\u0107, Chandra\nBhagavatula, Jae sung Park, Ronan\nLe Bras, Noah A. Smith, and Yejin\nChoi. 2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.findings-emnlp.253", |
| "url": null |
| } |
| }, |
| { |
| "117": { |
| "title": "Learning attitudes and attributes from multi-aspect\nreviews. In 2012 IEEE 12th International\nConference on Data Mining. IEEE, 1020\u20131025.", |
| "author": "Julian McAuley, Jure\nLeskovec, and Dan Jurafsky.\n2012.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "118": { |
| "title": "Distributed Representations of Words and Phrases\nand their Compositionality.", |
| "author": "Tomas Mikolov, Ilya\nSutskever, Kai Chen, Greg S Corrado,\nand Jeff Dean. 2013.", |
| "venue": "In Advances in Neural Information\nProcessing Systems 26, C. J. C. Burges,\nL. Bottou, M. Welling,\nZ. Ghahramani, and K. Q. Weinberger\n(Eds.). Curran Associates, Inc.,\n3111\u20133119.", |
| "url": null |
| } |
| }, |
| { |
| "119": { |
| "title": "Explaining Face Presentation Attack Detection Using\nNatural Language. In 2021 16th IEEE International\nConference on Automatic Face and Gesture Recognition (FG 2021).\n1\u20138.", |
| "author": "Hengameh Mirzaalian,\nMohamed E. Hussein, Leonidas Spinoulas,\nJonathan May, and Wael Abd-Almageed.\n2021.", |
| "venue": "https://doi.org/10.1109/FG52635.2021.9667024", |
| "url": null |
| } |
| }, |
| { |
| "120": { |
| "title": "Interpretable Machine Learning.", |
| "author": "Christoph Molnar.\n2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "121": { |
| "title": "Did the Model Understand the Question?. In\nProceedings of the 56th Annual Meeting of the\nAssociation for Computational Linguistics (Volume 1: Long Papers).\n1896\u20131906.", |
| "author": "Pramod Kaushik Mudrakarta,\nAnkur Taly, Mukund Sundararajan, and\nKedar Dhamdhere. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "122": { |
| "title": "Deep Learning For Dummies.", |
| "author": "John Paul Mueller and\nLuca Massaron. 2019.", |
| "venue": "John Wiley & Sons.", |
| "url": null |
| } |
| }, |
| { |
| "123": { |
| "title": "Definitions, methods, and applications in\ninterpretable machine learning.", |
| "author": "W. James Murdoch, Chandan\nSingh, Karl Kumbier, Reza Abbasi-Asl,\nand Bin Yu. 2019.", |
| "venue": "Proceedings of the National Academy of\nSciences 116, 44\n(2019), 22071\u201322080.", |
| "url": null |
| } |
| }, |
| { |
| "124": { |
| "title": "WT5?! Training Text-to-Text Models to Explain their\nPredictions.", |
| "author": "Sharan Narang, Colin\nRaffel, Katherine Lee, Adam Roberts,\nNoah Fiedel, and Karishma Malkan.\n2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "125": { |
| "title": "Justifying Recommendations using Distantly-Labeled\nReviews and Fine-Grained Aspects. In Proceedings\nof the 2019 Conference on Empirical Methods in Natural Language Processing\nand the 9th International Joint Conference on Natural Language Processing\n(EMNLP-IJCNLP). Association for Computational\nLinguistics, Hong Kong, China, 188\u2013197.", |
| "author": "Jianmo Ni, Jiacheng Li,\nand Julian McAuley. 2019.", |
| "venue": "https://doi.org/10.18653/v1/D19-1018", |
| "url": null |
| } |
| }, |
| { |
| "126": { |
| "title": "Bleu: a Method for Automatic Evaluation of\nMachine Translation. In Proceedings of the 40th\nAnnual Meeting of the Association for Computational Linguistics.\nAssociation for Computational Linguistics,\nPhiladelphia, Pennsylvania, USA,\n311\u2013318.", |
| "author": "Kishore Papineni, Salim\nRoukos, Todd Ward, and Wei-Jing Zhu.\n2002.", |
| "venue": "https://doi.org/10.3115/1073083.1073135", |
| "url": null |
| } |
| }, |
| { |
| "127": { |
| "title": "Multimodal Explanations: Justifying Decisions and\nPointing to the Evidence. In 2018 IEEE/CVF\nConference on Computer Vision and Pattern Recognition.\n8779\u20138788.", |
| "author": "Dong Huk Park, Lisa Anne\nHendricks, Zeynep Akata, Anna Rohrbach,\nBernt Schiele, Trevor Darrell, and\nMarcus Rohrbach. 2018.", |
| "venue": "https://doi.org/10.1109/CVPR.2018.00915", |
| "url": null |
| } |
| }, |
| { |
| "128": { |
| "title": "GloVe: Global Vectors for Word Representation.\nIn Proceedings of the 2014 Conference on Empirical\nMethods in Natural Language Processing (EMNLP).\nAssociation for Computational Linguistics,\nDoha, Qatar, 1532\u20131543.", |
| "author": "Jeffrey Pennington,\nRichard Socher, and Christopher\nManning. 2014.", |
| "venue": "https://doi.org/10.3115/v1/D14-1162", |
| "url": null |
| } |
| }, |
| { |
| "129": { |
| "title": "Dissecting Contextual Word Embeddings: Architecture\nand Representation. In Proceedings of the 2018\nConference on Empirical Methods in Natural Language Processing.\nAssociation for Computational Linguistics,\nBrussels, Belgium, 1499\u20131509.", |
| "author": "Matthew Peters, Mark\nNeumann, Luke Zettlemoyer, and Wen-tau\nYih. 2018c.", |
| "venue": "https://doi.org/10.18653/v1/D18-1179", |
| "url": null |
| } |
| }, |
| { |
| "130": { |
| "title": "Deep Contextualized Word Representations. In\nProceedings of the 2018 Conference of the North\nAmerican Chapter of the Association for Computational Linguistics: Human\nLanguage Technologies, Volume 1 (Long Papers).\nAssociation for Computational Linguistics,\nNew Orleans, Louisiana, 2227\u20132237.", |
| "author": "Matthew E. Peters, Mark\nNeumann, Mohit Iyyer, Matt Gardner,\nChristopher Clark, Kenton Lee, and\nLuke Zettlemoyer. 2018a.", |
| "venue": "https://doi.org/10.18653/v1/N18-1202", |
| "url": null |
| } |
| }, |
| { |
| "131": { |
| "title": "Deep contextualized word representations. In\nProceedings of NAACL-HLT.\n2227\u20132237.", |
| "author": "Matthew E Peters, Mark\nNeumann, Mohit Iyyer, Matt Gardner,\nChristopher Clark, Kenton Lee, and\nLuke Zettlemoyer. 2018b.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "132": { |
| "title": "Information-Theoretic Probing for Linguistic\nStructure. In Proceedings of the 58th Annual\nMeeting of the Association for Computational Linguistics.\nAssociation for Computational Linguistics,\nOnline, 4609\u20134622.", |
| "author": "Tiago Pimentel, Josef\nValvoda, Rowan Hall Maudslay, Ran\nZmigrod, Adina Williams, and Ryan\nCotterell. 2020.", |
| "venue": "https://www.aclweb.org/anthology/2020.acl-main.420", |
| "url": null |
| } |
| }, |
| { |
| "133": { |
| "title": "How Accents Confound: Probing for Accent\nInformation in End-to-End Speech Recognition Systems. In\nProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics. Association\nfor Computational Linguistics, Online,\n3739\u20133753.", |
| "author": "Archiki Prasad and\nPreethi Jyothi. 2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.acl-main.345", |
| "url": null |
| } |
| }, |
| { |
| "134": { |
| "title": "To what extent do human explanations of model\nbehavior align with actual model behavior?. In\nProceedings of the Fourth BlackboxNLP Workshop on\nAnalyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nPunta Cana, Dominican Republic, 1\u201314.", |
| "author": "Grusha Prasad, Yixin Nie,\nMohit Bansal, Robin Jia,\nDouwe Kiela, and Adina Williams.\n2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.blackboxnlp-1.1", |
| "url": null |
| } |
| }, |
| { |
| "135": { |
| "title": "Improving Language Understanding by Generative\nPre-Training.", |
| "author": "Alec Radford and Karthik\nNarasimhan. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "136": { |
| "title": "Language Models are Unsupervised Multitask\nLearners.", |
| "author": "Alec Radford, Jeff Wu,\nRewon Child, David Luan,\nDario Amodei, and Ilya Sutskever.\n2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "137": { |
| "title": "Exploring the Limits of Transfer Learning with a\nUnified Text-to-Text Transformer.", |
| "author": "Colin Raffel, Noam\nShazeer, Adam Roberts, Katherine Lee,\nSharan Narang, Michael Matena,\nYanqi Zhou, Wei Li, and\nPeter J. Liu. 2020.", |
| "venue": "Journal of Machine Learning Research\n21, 140 (2020),\n1\u201367.", |
| "url": null |
| } |
| }, |
| { |
| "138": { |
| "title": "An Analysis of Encoder Representations in\nTransformer-Based Machine Translation. In\nProceedings of the 2018 EMNLP Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nBrussels, Belgium, 287\u2013297.", |
| "author": "Alessandro Raganato and\nJ\u00f6rg Tiedemann. 2018.", |
| "venue": "https://doi.org/10.18653/v1/W18-5431", |
| "url": null |
| } |
| }, |
| { |
| "139": { |
| "title": "Explain Yourself! Leveraging Language Models for\nCommonsense Reasoning. In Proceedings of the 57th\nAnnual Meeting of the Association for Computational Linguistics.\nAssociation for Computational Linguistics,\nFlorence, Italy, 4932\u20134942.", |
| "author": "Nazneen Fatema Rajani,\nBryan McCann, Caiming Xiong, and\nRichard Socher. 2019.", |
| "venue": "https://doi.org/10.18653/v1/P19-1487", |
| "url": null |
| } |
| }, |
| { |
| "140": { |
| "title": "ESPRIT: Explaining Solutions to Physical\nReasoning Tasks. In Proceedings of the 58th Annual\nMeeting of the Association for Computational Linguistics.\nAssociation for Computational Linguistics,\nOnline, 7906\u20137917.", |
| "author": "Nazneen Fatema Rajani, Rui\nZhang, Yi Chern Tan, Stephan Zheng,\nJeremy Weiss, Aadit Vyas,\nAbhijit Gupta, Caiming Xiong,\nRichard Socher, and Dragomir Radev.\n2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.acl-main.706", |
| "url": null |
| } |
| }, |
| { |
| "141": { |
| "title": "Know What You Don\u2019t Know: Unanswerable Questions\nfor SQuAD. In Proceedings of the 56th Annual\nMeeting of the Association for Computational Linguistics (Volume 2: Short\nPapers). 784\u2013789.", |
| "author": "Pranav Rajpurkar, Robin\nJia, and Percy Liang. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "142": { |
| "title": "SQuAD: 100,000+ Questions for Machine Comprehension\nof Text. In Proceedings of the 2016 Conference on\nEmpirical Methods in Natural Language Processing.\n2383\u20132392.", |
| "author": "Pranav Rajpurkar, Jian\nZhang, Konstantin Lopyrev, and Percy\nLiang. 2016.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "143": { |
| "title": "Probing the Probing Paradigm: Does Probing Accuracy\nEntail Task Relevance?. In Proceedings of the 16th\nConference of the European Chapter of the Association for Computational\nLinguistics: Main Volume. Association for Computational\nLinguistics, Online, 3363\u20133377.", |
| "author": "Abhilasha Ravichander,\nYonatan Belinkov, and Eduard Hovy.\n2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.eacl-main.295", |
| "url": null |
| } |
| }, |
| { |
| "144": { |
| "title": "XAlgo: A Design Probe of Explaining Algorithms\u2019\nInternal States via Question-Answering (IUI \u201921).\nAssociation for Computing Machinery,\nNew York, NY, USA, 329\u2013339.", |
| "author": "Juan Rebanal, Jordan\nCombitsis, Yuqi Tang, and\nXiang \u2019Anthony\u2019 Chen. 2021.", |
| "venue": "https://doi.org/10.1145/3397481.3450676", |
| "url": null |
| } |
| }, |
| { |
| "145": { |
| "title": "\u201d Why should I trust you?\u201d Explaining the\npredictions of any classifier. In Proceedings of\nthe 22nd ACM SIGKDD international conference on knowledge discovery and data\nmining. 1135\u20131144.", |
| "author": "Marco Tulio Ribeiro,\nSameer Singh, and Carlos Guestrin.\n2016a.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "146": { |
| "title": "\u201cWhy Should I Trust You?\u201d: Explaining the\nPredictions of Any Classifier. In Proceedings of\nthe 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data\nMining (San Francisco, California, USA) (KDD\n\u201916). Association for Computing Machinery,\nNew York, NY, USA, 1135\u20131144.", |
| "author": "Marco Tulio Ribeiro,\nSameer Singh, and Carlos Guestrin.\n2016b.", |
| "venue": "https://doi.org/10.1145/2939672.2939778", |
| "url": null |
| } |
| }, |
| { |
| "147": { |
| "title": "Anchors: High-precision model-agnostic\nexplanations. In Thirty-Second AAAI Conference on\nArtificial Intelligence.", |
| "author": "Marco Tulio Ribeiro,\nSameer Singh, and Carlos Guestrin.\n2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "148": { |
| "title": "Beyond Accuracy: Behavioral Testing of NLP Models\nwith CheckList. In Proceedings of the 58th Annual\nMeeting of the Association for Computational Linguistics.\n4902\u20134912.", |
| "author": "Marco Tulio Ribeiro,\nTongshuang Wu, Carlos Guestrin, and\nSameer Singh. 2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "149": { |
| "title": "How Well Do Distributional Models Capture Different\nTypes of Semantic Knowledge?. In Proceedings of\nthe 53rd Annual Meeting of the Association for Computational Linguistics and\nthe 7th International Joint Conference on Natural Language Processing (Volume\n2: Short Papers). Association for Computational\nLinguistics, Beijing, China, 726\u2013730.", |
| "author": "Dana Rubinstein, Effi\nLevi, Roy Schwartz, and Ari\nRappoport. 2015.", |
| "venue": "https://doi.org/10.3115/v1/P15-2119", |
| "url": null |
| } |
| }, |
| { |
| "150": { |
| "title": "Please stop explaining black box models for high\nstakes decisions.", |
| "author": "Cynthia Rudin.\n2018.", |
| "venue": "Stat 1050\n(2018), 26.", |
| "url": null |
| } |
| }, |
| { |
| "151": { |
| "title": "Multitask Prompted Training Enables Zero-Shot Task\nGeneralization. In International Conference on\nLearning Representations.", |
| "author": "Victor Sanh, Albert\nWebson, Colin Raffel, Stephen Bach,\nLintang Sutawika, Zaid Alyafeai,\nAntoine Chaffin, Arnaud Stiegler,\nArun Raja, Manan Dey,\nM Saiful Bari, Canwen Xu,\nUrmish Thakker, Shanya Sharma Sharma,\nEliza Szczechla, Taewoon Kim,\nGunjan Chhablani, Nihal Nayak,\nDebajyoti Datta, Jonathan Chang,\nMike Tian-Jian Jiang, Han Wang,\nMatteo Manica, Sheng Shen,\nZheng Xin Yong, Harshit Pandey,\nRachel Bawden, Thomas Wang,\nTrishala Neeraj, Jos Rozen,\nAbheesht Sharma, Andrea Santilli,\nThibault Fevry, Jason Alan Fries,\nRyan Teehan, Teven Le Scao,\nStella Biderman, Leo Gao,\nThomas Wolf, and Alexander M Rush.\n2022.", |
| "venue": "https://openreview.net/forum?id=9Vrb9D0WI4", |
| "url": null |
| } |
| }, |
| { |
| "152": { |
| "title": "Is Attention Interpretable?. In\nProceedings of the 57th Annual Meeting of the\nAssociation for Computational Linguistics. Association\nfor Computational Linguistics, Florence, Italy,\n2931\u20132951.", |
| "author": "Sofia Serrano and\nNoah A. Smith. 2019a.", |
| "venue": "https://doi.org/10.18653/v1/P19-1282", |
| "url": null |
| } |
| }, |
| { |
| "153": { |
| "title": "Is Attention Interpretable?. In\nProceedings of the 57th Annual Meeting of the\nAssociation for Computational Linguistics. 2931\u20132951.", |
| "author": "Sofia Serrano and Noah A\nSmith. 2019b.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "154": { |
| "title": "Learning from the Best: Rationalizing Predictions\nby Adversarial Information Calibration.. In\nAAAI. 13771\u201313779.", |
| "author": "Lei Sha, Oana-Maria\nCamburu, and Thomas Lukasiewicz.\n2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "155": { |
| "title": "Knowledge-aware attentive neural network for\nranking question answer pairs. In The 41st\nInternational ACM SIGIR Conference on Research & Development in Information\nRetrieval. 901\u2013904.", |
| "author": "Ying Shen, Yang Deng,\nMin Yang, Yaliang Li,\nNan Du, Wei Fan, and\nKai Lei. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "156": { |
| "title": "Does String-Based Neural MT Learn Source\nSyntax?. In Proceedings of the 2016 Conference on\nEmpirical Methods in Natural Language Processing.\nAssociation for Computational Linguistics,\nAustin, Texas, 1526\u20131534.", |
| "author": "Xing Shi, Inkit Padhi,\nand Kevin Knight. 2016.", |
| "venue": "https://doi.org/10.18653/v1/D16-1159", |
| "url": null |
| } |
| }, |
| { |
| "157": { |
| "title": "Learning important features through propagating\nactivation differences. In Proceedings of the 34th\nInternational Conference on Machine Learning-Volume 70. JMLR. org,\n3145\u20133153.", |
| "author": "Avanti Shrikumar, Peyton\nGreenside, and Anshul Kundaje.\n2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "158": { |
| "title": "Fooling lime and shap: Adversarial attacks on post\nhoc explanation methods. In Proceedings of the\nAAAI/ACM Conference on AI, Ethics, and Society. 180\u2013186.", |
| "author": "Dylan Slack, Sophie\nHilgard, Emily Jia, Sameer Singh, and\nHimabindu Lakkaraju. 2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "159": { |
| "title": "Firearms and Tigers are Dangerous, Kitchen Knives\nand Zebras are Not: Testing whether Word Embeddings Can Tell. In\nProceedings of the 2018 EMNLP Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nBrussels, Belgium, 276\u2013286.", |
| "author": "Pia Sommerauer and\nAntske Fokkens. 2018.", |
| "venue": "https://doi.org/10.18653/v1/W18-5430", |
| "url": null |
| } |
| }, |
| { |
| "160": { |
| "title": "Probing for Referential Information in Language\nModels. In Proceedings of the 58th Annual Meeting\nof the Association for Computational Linguistics.\nAssociation for Computational Linguistics,\nOnline, 4177\u20134189.", |
| "author": "Ionut-Teodor Sorodoc,\nKristina Gulordava, and Gemma Boleda.\n2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.acl-main.384", |
| "url": null |
| } |
| }, |
| { |
| "161": { |
| "title": "Striving for Simplicity: The All Convolutional\nNet. In ICLR (workshop track).", |
| "author": "J Springenberg, Alexey\nDosovitskiy, Thomas Brox, and M\nRiedmiller. 2015.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "162": { |
| "title": "Joint Concept Learning and Semantic Parsing from\nNatural Language Explanations. In Proceedings of\nthe 2017 Conference on Empirical Methods in Natural Language Processing.\nAssociation for Computational Linguistics,\nCopenhagen, Denmark, 1527\u20131536.", |
| "author": "Shashank Srivastava, Igor\nLabutov, and Tom Mitchell.\n2017.", |
| "venue": "https://doi.org/10.18653/v1/D17-1161", |
| "url": null |
| } |
| }, |
| { |
| "163": { |
| "title": "Modeling Paths for Explainable Knowledge Base\nCompletion. In Proceedings of the 2019 ACL\nWorkshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nFlorence, Italy, 147\u2013157.", |
| "author": "Josua Stadelmaier and\nSebastian Pad\u00f3. 2019.", |
| "venue": "https://doi.org/10.18653/v1/W19-4816", |
| "url": null |
| } |
| }, |
| { |
| "164": { |
| "title": "An Operation Sequence Model for Explainable Neural\nMachine Translation. In Proceedings of the 2018\nEMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks\nfor NLP. Association for Computational Linguistics,\nBrussels, Belgium, 175\u2013186.", |
| "author": "Felix Stahlberg, Danielle\nSaunders, and Bill Byrne.\n2018.", |
| "venue": "https://doi.org/10.18653/v1/W18-5420", |
| "url": null |
| } |
| }, |
| { |
| "165": { |
| "title": "e-FEVER: Explanations and Summaries for Automated\nFact Checking. In TTO.", |
| "author": "Dominik Stammbach and\nElliott Ash. 2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "166": { |
| "title": "VL-BERT: Pre-training of Generic Visual-Linguistic\nRepresentations. In International Conference on\nLearning Representations.", |
| "author": "Weijie Su, Xizhou Zhu,\nYue Cao, Bin Li, Lewei\nLu, Furu Wei, and Jifeng Dai.\n2020.", |
| "venue": "https://openreview.net/forum?id=SygXPaEYvH", |
| "url": null |
| } |
| }, |
| { |
| "167": { |
| "title": "Axiomatic attribution for deep networks. In\nProceedings of the 34th International Conference on\nMachine Learning-Volume 70. JMLR. org, 3319\u20133328.", |
| "author": "Mukund Sundararajan, Ankur\nTaly, and Qiqi Yan. 2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "168": { |
| "title": "Interpretable Question Answering on Knowledge Bases\nand Text. In Proceedings of the 57th Annual\nMeeting of the Association for Computational Linguistics.\n4943\u20134951.", |
| "author": "Alona Sydorova, Nina\nPoerner, and Benjamin Roth.\n2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "169": { |
| "title": "What do you learn from context? Probing for\nsentence structure in contextualized word representations. In\nInternational Conference on Learning\nRepresentations.", |
| "author": "Ian Tenney, Patrick Xia,\nBerlin Chen, Alex Wang,\nAdam Poliak, R. Thomas McCoy,\nNajoung Kim, Benjamin Van Durme,\nSamuel R. Bowman, Dipanjan Das, and\nEllie Pavlick. 2019.", |
| "venue": "https://openreview.net/forum?id=SJzSgnRcKX", |
| "url": null |
| } |
| }, |
| { |
| "170": { |
| "title": "Select, Answer and Explain: Interpretable Multi-Hop\nReading Comprehension over Multiple Documents.. In\nAAAI. 9073\u20139080.", |
| "author": "Ming Tu, Kevin Huang,\nGuangtao Wang, Jing Huang,\nXiaodong He, and Bowen Zhou.\n2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "171": { |
| "title": "Iterative Recursive Attention Model for\nInterpretable Sequence Classification. In\nProceedings of the 2018 EMNLP Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nBrussels, Belgium, 249\u2013257.", |
| "author": "Martin Tutek and Jan\n\u0160najder. 2018.", |
| "venue": "https://doi.org/10.18653/v1/W18-5427", |
| "url": null |
| } |
| }, |
| { |
| "172": { |
| "title": "Explaining Visual Classification using Attributes.\nIn 2019 International Conference on Content-Based\nMultimedia Indexing (CBMI). 1\u20136.", |
| "author": "Muneeb ul Hassan, Philippe\nMulhem, Denis Pellerin, and Georges\nQu\u00e9not. 2019.", |
| "venue": "https://doi.org/10.1109/CBMI.2019.8877393", |
| "url": null |
| } |
| }, |
| { |
| "173": { |
| "title": "Attention is all you need. In\nAdvances in neural information processing\nsystems. 5998\u20136008.", |
| "author": "Ashish Vaswani, Noam\nShazeer, Niki Parmar, Jakob Uszkoreit,\nLlion Jones, Aidan N Gomez,\n\u0141ukasz Kaiser, and Illia\nPolosukhin. 2017.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "174": { |
| "title": "Cider: Consensus-based image description\nevaluation. In Proceedings of the IEEE conference\non computer vision and pattern recognition. 4566\u20134575.", |
| "author": "Ramakrishna Vedantam, C\nLawrence Zitnick, and Devi Parikh.\n2015.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "175": { |
| "title": "Information-Theoretic Probing with Minimum\nDescription Length. In Proceedings of the 2020\nConference on Empirical Methods in Natural Language Processing (EMNLP).\nAssociation for Computational Linguistics,\nOnline, 183\u2013196.", |
| "author": "Elena Voita and Ivan\nTitov. 2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.emnlp-main.14", |
| "url": null |
| } |
| }, |
| { |
| "176": { |
| "title": "Does it Make Sense? And Why? A Pilot Study for\nSense Making and Explanation. In Proceedings of\nthe 57th Annual Meeting of the Association for Computational Linguistics.\nAssociation for Computational Linguistics,\nFlorence, Italy, 4020\u20134026.", |
| "author": "Cunxiang Wang, Shuailong\nLiang, Yue Zhang, Xiaonan Li, and\nTian Gao. 2019a.", |
| "venue": "https://doi.org/10.18653/v1/P19-1393", |
| "url": null |
| } |
| }, |
| { |
| "177": { |
| "title": "Aspect Sentiment Classification with both\nWord-level and Clause-level Attention Networks.. In\nIJCAI, Vol. 2018.\n4439\u20134445.", |
| "author": "Jingjing Wang, Jie Li,\nShoushan Li, Yangyang Kang,\nMin Zhang, Luo Si, and\nGuodong Zhou. 2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "178": { |
| "title": "Learning from Explanations with Neural Execution\nTree. In ICLR.", |
| "author": "Ziqi Wang, Yujia Qin,\nWenxuan Zhou, Jun Yan,\nQinyuan Ye, Leonardo Neves,\nZhiyuan Liu, and Xiang Ren.\n2020.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "179": { |
| "title": "Multi-Granular Text Encoding for Self-Explaining\nCategorization. In Proceedings of the 2019 ACL\nWorkshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nFlorence, Italy, 41\u201345.", |
| "author": "Zhiguo Wang, Yue Zhang,\nMo Yu, Wei Zhang, Lin\nPan, Linfeng Song, Kun Xu, and\nYousef El-Kurdi. 2019b.", |
| "venue": "https://doi.org/10.18653/v1/W19-4805", |
| "url": null |
| } |
| }, |
| { |
| "180": { |
| "title": "Finetuned Language Models are Zero-Shot Learners.\nIn International Conference on Learning\nRepresentations.", |
| "author": "Jason Wei, Maarten Bosma,\nVincent Zhao, Kelvin Guu,\nAdams Wei Yu, Brian Lester,\nNan Du, Andrew M. Dai, and\nQuoc V Le. 2022.", |
| "venue": "https://openreview.net/forum?id=gEZrGCozdqR", |
| "url": null |
| } |
| }, |
| { |
| "181": { |
| "title": "FLEX: Faithful Linguistic Explanations for Neural\nNet Based Model Decisions.", |
| "author": "Sandareka Wickramanayake,\nWynne Hsu, and Mong Li Lee.\n2019.", |
| "venue": "Proceedings of the AAAI Conference on\nArtificial Intelligence 33, 01\n(Jul. 2019), 2539\u20132546.", |
| "url": null |
| } |
| }, |
| { |
| "182": { |
| "title": "Teach Me to Explain: A Review of Datasets for\nExplainable Natural Language Processing. In\nThirty-fifth Conference on Neural Information\nProcessing Systems Datasets and Benchmarks Track (Round 1).", |
| "author": "Sarah Wiegreffe and Ana\nMarasovic. 2021.", |
| "venue": "https://openreview.net/forum?id=ogNcxJn32BZ", |
| "url": null |
| } |
| }, |
| { |
| "183": { |
| "title": "Measuring Association Between Labels and\nFree-Text Rationales. In Proceedings of the 2021\nConference on Empirical Methods in Natural Language Processing.\nAssociation for Computational Linguistics,\nOnline and Punta Cana, Dominican Republic,\n10266\u201310284.", |
| "author": "Sarah Wiegreffe, Ana\nMarasovi\u0107, and Noah A. Smith.\n2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.emnlp-main.804", |
| "url": null |
| } |
| }, |
| { |
| "184": { |
| "title": "Attention is not not Explanation. In\nProceedings of the 2019 Conference on Empirical\nMethods in Natural Language Processing and the 9th International Joint\nConference on Natural Language Processing (EMNLP-IJCNLP).\nAssociation for Computational Linguistics,\nHong Kong, China, 11\u201320.", |
| "author": "Sarah Wiegreffe and\nYuval Pinter. 2019a.", |
| "venue": "https://doi.org/10.18653/v1/D19-1002", |
| "url": null |
| } |
| }, |
| { |
| "185": { |
| "title": "Attention is not not Explanation. In\nProceedings of the 2019 Conference on Empirical\nMethods in Natural Language Processing and the 9th International Joint\nConference on Natural Language Processing (EMNLP-IJCNLP).\n11\u201320.", |
| "author": "Sarah Wiegreffe and\nYuval Pinter. 2019b.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "186": { |
| "title": "Simple statistical gradient-following algorithms\nfor connectionist reinforcement learning.", |
| "author": "Ronald J Williams.\n1992.", |
| "venue": "Machine learning 8,\n3-4 (1992), 229\u2013256.", |
| "url": null |
| } |
| }, |
| { |
| "187": { |
| "title": "Faithful Multimodal Explanation for Visual Question\nAnswering. In Proceedings of the 2019 ACL Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nFlorence, Italy, 103\u2013112.", |
| "author": "Jialin Wu and Raymond\nMooney. 2019.", |
| "venue": "https://doi.org/10.18653/v1/W19-4812", |
| "url": null |
| } |
| }, |
| { |
| "188": { |
| "title": "Polyjuice: Generating Counterfactuals for\nExplaining, Evaluating, and Improving Models. In\nProceedings of the 59th Annual Meeting of the\nAssociation for Computational Linguistics and the 11th International Joint\nConference on Natural Language Processing (Volume 1: Long Papers).\n6707\u20136723.", |
| "author": "Tongshuang Wu, Marco Tulio\nRibeiro, Jeffrey Heer, and Daniel S\nWeld. 2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "189": { |
| "title": "Show, attend and tell: Neural image caption\ngeneration with visual attention. In International\nconference on machine learning. 2048\u20132057.", |
| "author": "Kelvin Xu, Jimmy Ba,\nRyan Kiros, Kyunghyun Cho,\nAaron Courville, Ruslan Salakhudinov,\nRich Zemel, and Yoshua Bengio.\n2015.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "190": { |
| "title": "Stacked attention networks for image question\nanswering. In Proceedings of the IEEE conference\non computer vision and pattern recognition. 21\u201329.", |
| "author": "Zichao Yang, Xiaodong He,\nJianfeng Gao, Li Deng, and\nAlex Smola. 2016.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "191": { |
| "title": "HotpotQA: A Dataset for Diverse, Explainable\nMulti-hop Question Answering. In Proceedings of\nthe 2018 Conference on Empirical Methods in Natural Language Processing.\n2369\u20132380.", |
| "author": "Zhilin Yang, Peng Qi,\nSaizheng Zhang, Yoshua Bengio,\nWilliam Cohen, Ruslan Salakhutdinov,\nand Christopher D Manning.\n2018.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "192": { |
| "title": "Few-Shot Out-of-Domain Transfer Learning of Natural\nLanguage Explanations. In NeurIPS 2021 Workshop on\nDeep Generative Models and Downstream Applications.", |
| "author": "Yordan Yordanov, Vid\nKocijan, Thomas Lukasiewicz, and\nOana-Maria Camburu. 2021.", |
| "venue": "https://openreview.net/forum?id=g9PUonwGk2M", |
| "url": null |
| } |
| }, |
| { |
| "193": { |
| "title": "Stability.", |
| "author": "Bin Yu. 2013.", |
| "venue": "Bernoulli 19,\n4 (09 2013),\n1484\u20131500.", |
| "url": null |
| } |
| }, |
| { |
| "194": { |
| "title": "Rethinking Cooperative Rationalization:\nIntrospective Extraction and Complement Control. In\nProceedings of the 2019 Conference on Empirical\nMethods in Natural Language Processing and the 9th International Joint\nConference on Natural Language Processing (EMNLP-IJCNLP).\n4085\u20134094.", |
| "author": "Mo Yu, Shiyu Chang,\nYang Zhang, and Tommi Jaakkola.\n2019a.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "195": { |
| "title": "Deep modular co-attention networks for visual\nquestion answering. In Proceedings of the IEEE\nconference on computer vision and pattern recognition.\n6281\u20136290.", |
| "author": "Zhou Yu, Jun Yu,\nYuhao Cui, Dacheng Tao, and\nQi Tian. 2019b.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "196": { |
| "title": "Deconvolutional networks. In\n2010 IEEE Computer Society Conference on computer\nvision and pattern recognition. IEEE, 2528\u20132535.", |
| "author": "Matthew D Zeiler, Dilip\nKrishnan, Graham W Taylor, and Rob\nFergus. 2010.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "197": { |
| "title": "From Recognition to Cognition: Visual Commonsense\nReasoning. In The IEEE Conference on Computer\nVision and Pattern Recognition (CVPR).", |
| "author": "Rowan Zellers, Yonatan\nBisk, Ali Farhadi, and Yejin Choi.\n2019.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "198": { |
| "title": "MERLOT: Multimodal Neural Script Knowledge Models.\nIn Advances in Neural Information Processing\nSystems 34.", |
| "author": "Rowan Zellers, Ximing Lu,\nJack Hessel, Youngjae Yu,\nJae Sung Park, Jize Cao,\nAli Farhadi, and Yejin Choi.\n2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "199": { |
| "title": "WinoWhy: A Deep Diagnosis of Essential\nCommonsense Knowledge for Answering Winograd Schema Challenge. In\nProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics. Association\nfor Computational Linguistics, Online,\n5736\u20135745.", |
| "author": "Hongming Zhang, Xinran\nZhao, and Yangqiu Song.\n2020.", |
| "venue": "https://doi.org/10.18653/v1/2020.acl-main.508", |
| "url": null |
| } |
| }, |
| { |
| "200": { |
| "title": "Language Modeling Teaches You More than Translation\nDoes: Lessons Learned Through Auxiliary Syntactic Task Analysis. In\nProceedings of the 2018 EMNLP Workshop\nBlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.\nAssociation for Computational Linguistics,\nBrussels, Belgium, 359\u2013361.", |
| "author": "Kelly Zhang and Samuel\nBowman. 2018.", |
| "venue": "https://doi.org/10.18653/v1/W18-5448", |
| "url": null |
| } |
| }, |
| { |
| "201": { |
| "title": "Lirex: Augmenting language inference with relevant\nexplanations. In Proceedings of the AAAI\nConference on Artificial Intelligence, Vol. 35.\n14532\u201314539.", |
| "author": "Xinyan Zhao and VG Vinod\nVydiswaran. 2021.", |
| "venue": "", |
| "url": null |
| } |
| }, |
| { |
| "202": { |
| "title": "Towards Interpretable Natural Language\nUnderstanding with Explanations as Latent Variables. In\nAdvances in Neural Information Processing\nSystems, H. Larochelle,\nM. Ranzato, R. Hadsell,\nM.F. Balcan, and H. Lin (Eds.),\nVol. 33. Curran Associates, Inc.,\n6803\u20136814.", |
| "author": "Wangchunshu Zhou, Jinyi\nHu, Hanlin Zhang, Xiaodan Liang,\nMaosong Sun, Chenyan Xiong, and\nJian Tang. 2020.", |
| "venue": "https://proceedings.neurips.cc/paper/2020/file/4be2c8f27b8a420492f2d44463933eb6-Paper.pdf", |
| "url": null |
| } |
| }, |
| { |
| "203": { |
| "title": "DirectProbe: Studying Representations without\nClassifiers. In Proceedings of the 2021 Conference\nof the North American Chapter of the Association for Computational\nLinguistics: Human Language Technologies. Association\nfor Computational Linguistics, Online,\n5070\u20135083.", |
| "author": "Yichu Zhou and Vivek\nSrikumar. 2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.naacl-main.401", |
| "url": null |
| } |
| }, |
| { |
| "204": { |
| "title": "Investigating the Effect of Natural Language\nExplanations on Out-of-Distribution Generalization in Few-shot NLI. In\nProceedings of the Second Workshop on Insights from\nNegative Results in NLP. Association for Computational\nLinguistics, Online and Punta Cana, Dominican Republic,\n117\u2013124.", |
| "author": "Yangqiaoyu Zhou and\nChenhao Tan. 2021.", |
| "venue": "https://doi.org/10.18653/v1/2021.insights-1.17", |
| "url": null |
| } |
| } |
| ], |
| "url": "http://arxiv.org/html/2103.11072v3" |
| } |