{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:01.106046Z"
},
"title": "Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Attanasio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University",
"location": {
"settlement": "Milan",
"country": "Italy"
}
},
"email": "giuseppe.attanasio3@unibocconi.it"
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University",
"location": {
"settlement": "Milan",
"country": "Italy"
}
},
"email": "debora.nozza@unibocconi.it"
},
{
"first": "Eliana",
"middle": [],
"last": "Pastor",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Politecnico di Torino",
"location": {
"settlement": "Turin",
"country": "Italy"
}
},
"email": "eliana.pastor@polito.it"
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University",
"location": {
"settlement": "Milan",
"country": "Italy"
}
},
"email": "dirk.hovy@unibocconi.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Warning: This paper contains examples of language that some people may find offensive. Transformer-based Natural Language Processing models have become the standard for hate speech detection. However, the unconscious use of these techniques for such a critical task comes with negative consequences. Various works have demonstrated that hate speech classifiers are biased. These findings have prompted efforts to explain classifiers, mainly using attribution methods. In this paper, we provide the first benchmark study of interpretability approaches for hate speech detection. We cover four post-hoc token attribution approaches to explain the predictions of Transformer-based misogyny classifiers in English and Italian. Further, we compare generated attributions to attention analysis. We find that only two algorithms provide faithful explanations aligned with human expectations. Gradient-based methods and attention, however, show inconsistent outputs, making their value for explanations questionable for hate speech detection tasks.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Warning: This paper contains examples of language that some people may find offensive. Transformer-based Natural Language Processing models have become the standard for hate speech detection. However, the unconscious use of these techniques for such a critical task comes with negative consequences. Various works have demonstrated that hate speech classifiers are biased. These findings have prompted efforts to explain classifiers, mainly using attribution methods. In this paper, we provide the first benchmark study of interpretability approaches for hate speech detection. We cover four post-hoc token attribution approaches to explain the predictions of Transformer-based misogyny classifiers in English and Italian. Further, we compare generated attributions to attention analysis. We find that only two algorithms provide faithful explanations aligned with human expectations. Gradient-based methods and attention, however, show inconsistent outputs, making their value for explanations questionable for hate speech detection tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The advent of social media has proliferated hateful content online -with severe consequences for attacked users even in real life. Women are often attacked online. A study by Data & Society 1 of women between 15 to 29 years showed that 41% self-censored to avoid online harassment. Of those, 21% stopped using social media, 13% stopped going online, and 4% stopped using their mobile phone altogether. These numbers demonstrate the need for automatic misogyny detection systems for moderation purposes. Table 1 : Explanations generated by benchmarked methods. A fine-tuned BERT wrongly classifies the text as misogynous. Darker colors indicate higher importance.",
"cite_spans": [],
"ref_spans": [
{
"start": 503,
"end": 510,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Various Natural Language Processing (NLP) models have been proposed to detect and mitigate misogynous content (Basile et al., 2019; Indurthi et al., 2019; Lees et al., 2020; Fersini et al., 2020a; Safi Samghabadi et al., 2020; Attanasio and Pastor, 2020; Guest et al., 2021; Attanasio et al., 2022) . However, several papers already demonstrated that hate speech detection models suffer from unintended bias, resulting in harmful predictions for protected categories (e.g., women). Table 1 (top row) reports a very simple sentence that a stateof-the-art NLP model misclassifies as misogynous content.",
"cite_spans": [
{
"start": 110,
"end": 131,
"text": "(Basile et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 132,
"end": 154,
"text": "Indurthi et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 155,
"end": 173,
"text": "Lees et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 174,
"end": 196,
"text": "Fersini et al., 2020a;",
"ref_id": "BIBREF13"
},
{
"start": 197,
"end": 226,
"text": "Safi Samghabadi et al., 2020;",
"ref_id": null
},
{
"start": 227,
"end": 254,
"text": "Attanasio and Pastor, 2020;",
"ref_id": "BIBREF2"
},
{
"start": 255,
"end": 274,
"text": "Guest et al., 2021;",
"ref_id": "BIBREF19"
},
{
"start": 275,
"end": 298,
"text": "Attanasio et al., 2022)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 482,
"end": 500,
"text": "Table 1 (top row)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This issue shows the need to understand the rationale behind a given prediction. A mature literature on model interpretability with applications to NLP-specific approaches exists (Ross et al., 2021; Sanyal and Ren, 2021; Rajani et al., 2019, interalia) . 2 As explanations become part of legal regulations (Goodman and Flaxman, 2017) , a growing body of work has focused on the evaluation of explanation approaches (Nguyen, 2018; Hase and Bansal, 2020; Nguyen and Mart\u00ednez, 2020; Jacovi and Goldberg, 2020, inter-alia) . However, little guidance on which interpretability method suits best to the sensible context of misogyny identification has been given. For instance, some explanations in Table 1 hint to which token is wrongly driving the classification and even highlight a potential bias of the model. But not all of them.",
"cite_spans": [
{
"start": 179,
"end": 198,
"text": "(Ross et al., 2021;",
"ref_id": "BIBREF47"
},
{
"start": 199,
"end": 220,
"text": "Sanyal and Ren, 2021;",
"ref_id": "BIBREF50"
},
{
"start": 221,
"end": 252,
"text": "Rajani et al., 2019, interalia)",
"ref_id": null
},
{
"start": 255,
"end": 256,
"text": "2",
"ref_id": null
},
{
"start": 306,
"end": 333,
"text": "(Goodman and Flaxman, 2017)",
"ref_id": "BIBREF18"
},
{
"start": 415,
"end": 429,
"text": "(Nguyen, 2018;",
"ref_id": "BIBREF38"
},
{
"start": 430,
"end": 452,
"text": "Hase and Bansal, 2020;",
"ref_id": "BIBREF22"
},
{
"start": 453,
"end": 479,
"text": "Nguyen and Mart\u00ednez, 2020;",
"ref_id": "BIBREF37"
},
{
"start": 480,
"end": 518,
"text": "Jacovi and Goldberg, 2020, inter-alia)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 692,
"end": 699,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We bridge this gap. We benchmark interpretability approaches to explain state-of-the-art Transformer classifiers on the task of automatic misogyny identification. We cover two benchmark Twitter datasets for misogyny detection in English and Italian (Fersini et al., 2018 (Fersini et al., , 2020b . We focus on single-instance, post-hoc input attribution methods to measure the importance of each token for predicting the instance label. Our benchmark suite comprises gradient-based methods (Gradients (Simonyan et al., 2014) and Integrated Gradients (Sundararajan et al., 2017) ), Shapley values-based methods (SHAP (Lundberg and Lee, 2017)), and input occlusion (Sampling-And-Occlusion ). We evaluate explanations in terms of plausibility and faithfulness (Jacovi and Goldberg, 2020) . Table 1 reports an example of token-wise contribution computed with these methods. Furthermore, we study attention-based visualizations and compare them to token attribution methods searching for any correlation. To our knowledge, this is the first benchmarking study of feature attribution methods used to explain Transformer-based misogyny classifiers.",
"cite_spans": [
{
"start": 249,
"end": 270,
"text": "(Fersini et al., 2018",
"ref_id": "BIBREF14"
},
{
"start": 271,
"end": 295,
"text": "(Fersini et al., , 2020b",
"ref_id": "BIBREF15"
},
{
"start": 501,
"end": 524,
"text": "(Simonyan et al., 2014)",
"ref_id": "BIBREF53"
},
{
"start": 550,
"end": 577,
"text": "(Sundararajan et al., 2017)",
"ref_id": "BIBREF55"
},
{
"start": 757,
"end": 784,
"text": "(Jacovi and Goldberg, 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 787,
"end": 794,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our results show that SHAP and Sampling-And-Occlusion provide plausible and faithful explanations and are consequently recommended for explaining misogyny classifiers' outputs. We also find that, despite their popularity, gradient-and attention-based methods do not provide faithful explanations. Outputs of gradient-based explanation methods are inconsistent, while attention does not provide any useful insights for the classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions We benchmark four post-hoc explanation methods on two misogyny identification datasets across two languages, English and Italian. We evaluate explanations in terms of plausibility and faithfulness. We demonstrate that not every token attribution method provides reliable insights and that attention cannot serve as explanation. Code is available at https://github.c om/MilaNLProc/benchmarking-xai-m isogyny.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following, we describe the scope ( \u00a72.1) of our benchmarking study, the included methods ( \u00a72.2), and the evaluation criteria ( \u00a72.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarking suite",
"sec_num": "2"
},
{
"text": "We consider local explanation methods (Lipton, 2018; Guidotti et al., 2019) . Given a classification model, a data point, and a target class, these methods explain the probability assigned to the class by the model. Global explanations provide model-or class-wise explanations and are hence out of the scope of this work.",
"cite_spans": [
{
"start": 38,
"end": 52,
"text": "(Lipton, 2018;",
"ref_id": "BIBREF32"
},
{
"start": 53,
"end": 75,
"text": "Guidotti et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scope",
"sec_num": "2.1"
},
{
"text": "Among local explanation methods, we focus on post-hoc interpretability, i.e., we explain classification models that have already been trained. We leave out inherently interpretable models (Rudin, 2019) as they do not find widespread use in NLPdriven practical applications.",
"cite_spans": [
{
"start": 188,
"end": 201,
"text": "(Rudin, 2019)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scope",
"sec_num": "2.1"
},
{
"text": "We restrict our study to input attribution methods. In Transformer-based language models, inputs typically correspond to the tokens' input embeddings (Madsen et al., 2021) . We, therefore, refer to token attribution methods to generate a contribution score for each input token (or word, resulting by some aggregation of sub-word token contributions).",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Madsen et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scope",
"sec_num": "2.1"
},
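A minimal sketch of such a word-level aggregation (our illustration; as noted in footnote 6, the paper itself reports sub-word tokens individually) using a Hugging Face fast tokenizer's word alignment:

```python
# Hypothetical aggregation of sub-word attribution scores to word level.
from transformers import AutoTokenizer

def word_level_scores(text, token_scores, tokenizer):
    """Average sub-word scores into word scores via the fast tokenizer's
    word alignment. `token_scores` holds one float per non-special token."""
    enc = tokenizer(text, add_special_tokens=False)
    words = {}
    for idx, word_id in enumerate(enc.word_ids()):
        words.setdefault(word_id, []).append(token_scores[idx])
    return [sum(s) / len(s) for _, s in sorted(words.items())]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# e.g., scores for sub-words like ["pu", "##ssy", "boy"] collapse to two words.
print(word_level_scores("pussy boy", [0.6, 0.4, -0.3], tokenizer))
```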
{
"text": "We benchmark three families of input token attribution methods. First, we derive token contribution using gradient attribution. These methods compute the gradient of the output with respect to each of the inputs. We compute simple gradient (G) (Simonyan et al., 2014) and integrated gradients (IG) (Sundararajan et al., 2017) . Then, we attribute inputs using approximated Shapley values (SHAP) (Lundberg and Lee, 2017). Finally, following the literature on input perturbation via occlusion, we impute input contributions using Sampling-And-Occlusion (SOC) . See appendix A.2 for all implementation details.",
"cite_spans": [
{
"start": 244,
"end": 267,
"text": "(Simonyan et al., 2014)",
"ref_id": "BIBREF53"
},
{
"start": 298,
"end": 325,
"text": "(Sundararajan et al., 2017)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2.2"
},
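As a concrete illustration of the gradient-based family, here is a minimal sketch, assuming a fine-tuned BERT checkpoint (the bert-base-cased name is a placeholder for our fine-tuned models), that computes G and IG over the input embeddings with Captum, targeting the misogynous class (index 1):

```python
# Hedged sketch: Gradients (G) and Integrated Gradients (IG) with Captum.
import torch
from captum.attr import IntegratedGradients, Saliency
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-cased"  # placeholder for the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def forward(inputs_embeds, attention_mask):
    # Captum attributes the target-class logit w.r.t. the input embeddings.
    return model(inputs_embeds=inputs_embeds, attention_mask=attention_mask).logits

enc = tokenizer("You are a smart woman", return_tensors="pt")
embeds = model.bert.embeddings.word_embeddings(enc["input_ids"]).detach()

# G: gradient magnitude of the misogynous logit, summed over the hidden dim.
g = Saliency(forward).attribute(
    embeds, target=1, additional_forward_args=(enc["attention_mask"],)
).sum(-1)

# IG: path integral from a zero baseline; Captum already multiplies the
# averaged gradients by (input - baseline), i.e., by the word embeddings.
ig = IntegratedGradients(forward).attribute(
    embeds, target=1, additional_forward_args=(enc["attention_mask"],)
).sum(-1)
```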
{
"text": "There is an open debate of whether attention is explanation or not (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Bastings and Filippova, 2020) . Our benchmarking study provides a perfect test-bed to understand if attention aligns with attribution methods. We compare standard self-attention with effective attention (Brunner et al., 2020; Sun and Marasovi\u0107, 2021 hidden representations using Hidden Token Attribution (HTA) (Brunner et al., 2020) .",
"cite_spans": [
{
"start": 67,
"end": 91,
"text": "(Jain and Wallace, 2019;",
"ref_id": "BIBREF26"
},
{
"start": 92,
"end": 119,
"text": "Wiegreffe and Pinter, 2019;",
"ref_id": "BIBREF60"
},
{
"start": 120,
"end": 149,
"text": "Bastings and Filippova, 2020)",
"ref_id": "BIBREF5"
},
{
"start": 323,
"end": 345,
"text": "(Brunner et al., 2020;",
"ref_id": "BIBREF7"
},
{
"start": 346,
"end": 369,
"text": "Sun and Marasovi\u0107, 2021",
"ref_id": "BIBREF54"
},
{
"start": 430,
"end": 452,
"text": "(Brunner et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": null
},
{
"text": "We use plausibility and faithfulness as evaluation criteria (Jacovi and Goldberg, 2020) . A \"plausible\" explanation should align with human beliefs. In our context, the provided explanation artifacts should convince humans that highlighted words are responsible for either misogynous speech or not. 3 A \"faithful\" explanation is a proxy for the true \"reasoning\" of the model. Gradient attributions are commonly considered faithful explanations as gradients provide a direct, mathematical measure of how variations in the input influences output. For the remaining attribution approaches, we measure faithfulness under the linearity assumption (Jacovi and Goldberg, 2020), i.e., the impact of certain parts of the input is independent of the rest. In our case, independent units correspond to input tokens. Following related work (Jacovi et al., 2018; Feng et al., 2018; Serrano and Smith, 2019 , inter-alia), we evaluate faithfulness by erasing input tokens and measuring the variation on the model prediction. Ideally, faithful interpretations highlight tokens that change the prediction the most.",
"cite_spans": [
{
"start": 60,
"end": 87,
"text": "(Jacovi and Goldberg, 2020)",
"ref_id": "BIBREF24"
},
{
"start": 829,
"end": 850,
"text": "(Jacovi et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 851,
"end": 869,
"text": "Feng et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 870,
"end": 893,
"text": "Serrano and Smith, 2019",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation criteria",
"sec_num": "2.3"
},
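A minimal sketch of this erasure-based check follows; `model` and `tokenizer` denote a fine-tuned BERT classifier and its tokenizer (our assumed names), and removal operates on whitespace-separated words for simplicity:

```python
# Hedged sketch: faithfulness via token erasure (delta in class probability).
import torch

@torch.no_grad()
def prob_misogynous(text):
    enc = tokenizer(text, return_tensors="pt")
    return model(**enc).logits.softmax(-1)[0, 1].item()  # class 1 = misogynous

def delta_p(words, position):
    """P(misogynous | full text) - P(misogynous | text minus one word)."""
    full = " ".join(words)
    reduced = " ".join(w for i, w in enumerate(words) if i != position)
    return prob_misogynous(full) - prob_misogynous(reduced)

# A faithful attribution should rank the words with the largest |delta_p|
# as the most important ones.
```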
{
"text": "Automatic misogyny identification is the binary classification task to predict whether a text is misogynous or not. 4 We focus on two recentlyreleased datasets for misogynous content identification in English and Italian, released as part of the Automatic Misogyny Identification (AMI) shared tasks (Fersini et al., 2018 (Fersini et al., , 2020b . Both datasets have been collected via keyword-based search on Twitter. Table 2 reports the dataset statistics.",
"cite_spans": [
{
"start": 116,
"end": 117,
"text": "4",
"ref_id": null
},
{
"start": 299,
"end": 320,
"text": "(Fersini et al., 2018",
"ref_id": "BIBREF14"
},
{
"start": 321,
"end": 345,
"text": "(Fersini et al., , 2020b",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 419,
"end": 426,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2.4"
},
{
"text": "Among the Transformer-based models, we focus on BERT (Devlin et al., 2019) due to its widespread usage. We fine-tuned pre-trained BERT-based models on the AMI-EN and AMI-IT datasets. We report full details on the training in appendix A.1. Table 2 reports the macro-F1 performance of BERT models on the test splits. We explain BERT outputs on both tweets from test sets 5 and manually-generated data. On real data, we address two questions: 1) Is it right for the right reason?, i.e., we assess if the model relies on a plausible set of tokens; 2) What is the source of error?, i.e., we aim to identify tokens that wrongly drive the classification outcome. By explaining manually-defined texts, we can probe for model biases.",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 239,
"end": 246,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3"
},
{
"text": "Tables 3-6 report token contributions computed with benchmarked approaches ( \u00a72.2). We report contributions for individual tokens. 6 We define table contents as follows. Separately by explanation method, we first generate raw contributions and then L1-normalize the vector. Finally, we use a linear color scale between solid blue (assigned for contribution -1), white (contribution 0), and solid red (contribution 1). For all reported examples, we explain the misogynous class. Hence, positive contributions indicate tokens pushing towards the misogynous class, while negative contributions push towards the non-misogynous one. Lastly, the second top row reports the variation on the probability assigned by the model when the corresponding token is erased (\u2206P ).",
"cite_spans": [
{
"start": 131,
"end": 132,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3"
},
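The table construction can be sketched as follows; this is a minimal re-implementation of the normalization and color mapping described above, not the exact plotting code:

```python
# Hedged sketch: L1-normalize raw contributions, then map each score onto a
# linear blue (-1) / white (0) / red (+1) color scale.
import numpy as np

def normalize_l1(scores):
    scores = np.asarray(scores, dtype=float)
    total = np.abs(scores).sum()
    return scores / total if total > 0 else scores

def to_rgb(score):
    s = float(np.clip(score, -1.0, 1.0))
    if s >= 0:                      # white -> solid red
        return (1.0, 1.0 - s, 1.0 - s)
    return (1.0 + s, 1.0 + s, 1.0)  # white -> solid blue

print([to_rgb(s) for s in normalize_l1([2.0, -1.0, 1.0])])
```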
{
"text": "Error analysis Table 3 shows the explanations for a tweet incorrectly predicted as misogynous. IG, SHAP, and SOC assign a negative contribution to the word boy. This matches our expectations since the target of the hateful comment is the male gender. These explanations are thus plausible. Still, the tweet is classified as misogynous. The tokens pu and ##ssy mainly drive the prediction to the misogynous class, as revealed by all explainers (SHAP and SOC in a clearer way). Ex- planations suggest the model is failing to assign the proper importance to the targeted gender of the hateful comment. These plausible explanations are also faithful. Removing the term boy increases the probability of the misogynous class while omitting tokens pu and ##ssy decrease it.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "We further analyze the term p*ssy and its role as a source of errors. Almost all tweets of the test set containing the term p*ssy are labeled by the model as misogynous. The false-positive rate on this set of tweets is 0.93 compared to the 0.49 of the overall test set. Similar considerations apply to English words typically associated with misogynous content as b*tch and wh*re.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
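The subgroup statistic above can be reproduced with a few lines; `texts`, `labels` (0 = non-misogynous), and `preds` are assumed to hold the test set and the model's predictions:

```python
# Hedged sketch: false-positive rate on tweets containing a given term.
def false_positive_rate(texts, labels, preds, term=None):
    idx = [
        i for i, t in enumerate(texts)
        if labels[i] == 0 and (term is None or term in t.lower())
    ]
    return sum(preds[i] == 1 for i in idx) / len(idx) if idx else float("nan")

# e.g., compare false_positive_rate(texts, labels, preds, "pussy")
# against false_positive_rate(texts, labels, preds) for the whole test set.
```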
{
"text": "Is it right for the right reason? Table 4 shows the explanation of a correctly predicted misogynous tweet. Gradient, SHAP, and SOC explanations assign a high positive contribution to slurs (b*tch, s*ck, and d*ck). These explanations align with human expectations. However, not all slurs impact the classification outcome. Explanations on b*tch are faithful but they are not for s*ck and d*ck. Differently, IG does not highlight any token with a positive contribution. This goes against expectations as the predicted class is misogynous and therefore we cannot draw conclusions.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "We study explanations to search for errors caused by unintended bias, a known phenomenon affecting models for misogynous identification. A model suffering from unintended bias performs better (or worse) when texts mention specific identity terms (e.g., woman) (Dixon et al., 2018) . Table 1 reports the non-misogynous text \"You are a smart woman\" incorrectly labeled as misogynous. SHAP, SOC, and, to a lesser extent, Gradient explanations indicate the term woman as responsible for the prediction. This result matches with recent findings on the unintended bias of hateful detection models Dixon et al., 2018; Borkan et al., 2019) and therefore explanations are plausible. Removing the term woman causes a drop of 0.48 to the probability of the misogynous class. This validates the insight provided by the explanations. Similar to the previous examples, the explanation of IG is difficult to interpret. Table 5 shows another example of unintended bias. The text \"Ann is in the kitchen\" is incorrectly labeled as misogynous. Gradients, SHAP, and SOC assign the highest positive contribution to the (commonly) female name Ann. Interestingly, the second most important word for Gradients and SHAP is kitchen, reflecting stereotypes learned by the classification model (Fersini et al., 2018) . These explanations are faithful: the model prediction drops by a significant 0.40 and 0.24 when erasing the tokens Ann and kitchen, respectively. We substitute the name Ann with David, a common male name. We observe that the prediction and the explanations drastically change. The text is correctly assigned to the non-misogynous class and IG, SHAP, and SOC assign a high negative contribution to the word David. The all-positive contributions of Gradients do not provide useful insights.",
"cite_spans": [
{
"start": 260,
"end": 280,
"text": "(Dixon et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 591,
"end": 610,
"text": "Dixon et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 611,
"end": 631,
"text": "Borkan et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 1266,
"end": 1288,
"text": "(Fersini et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 1",
"ref_id": null
},
{
"start": 904,
"end": 911,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Unintended bias",
"sec_num": null
},
{
"text": "Bias due to language-specific expressions Table 6 (left) shows an example of incorrectly predicted misogynous text in Italian: \"p*rca p*ttana che gran pezzo di f*ga\" (\"holy sh*t what a nice piece of *ss\"). The expression \"p*rca p*ttana\" (literally pig sl*t) is a taboo interjection commonly used in the Italian language and does not imply misogynous speech.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 50,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Unintended bias",
"sec_num": null
},
{
"text": "The interpretation of the gradient explanation is hard since all contributions are positive and associated with the misogynous class. All explanation methods assign a positive contribution to the word f*ga (*ss). SHAP, SOC, and, to a lesser extent IG, indicate that the main reason behind the nonmisogynous prediction is the term p*rca. The bias of the model towards this expression was firstly exposed in (Nozza, 2021) and it thus validates IG, SHAP, and SOC explanations as plausible. When one of the two terms of the expression is removed, the probability increases significantly. This suggests that explanations by IG, SHAP, and SOC are faithful. Further, we inspect the behavior of explanation methods when we erase one of the terms. We omit the word p*rca and we report its explanations on Table 6 : Manually-generated example. Complete text (left) and text without initial \"p*rca\" (right). Non-literal translation: \"holy sh*t what a nice piece of *ss\". Ground truth (both): misogynous. Prediction: non-misogynous (P = 0.03) (left), misogynous (P = 0.97) (right).",
"cite_spans": [
{
"start": 406,
"end": 419,
"text": "(Nozza, 2021)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 796,
"end": 803,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Unintended bias",
"sec_num": null
},
{
"text": "f*ga (*ss) has the highest positive contribution for all the approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unintended bias",
"sec_num": null
},
{
"text": "We follow up on the open debate on attention used as an explanation, providing examples on the misogyny identification task. Figure 1 shows selfattention maps in our fine-tuned BERT at different layers and heads for the already discussed sentence \"You are a smart woman\". Based on our previous analysis ( \u00a74), we know that the model has an unintended bias towards the token \"woman\". We cannot infer the same information from attention maps. Raw attention weights differ significantly for different layers and heads. In this example, there is a vertical pattern (Kovaleva et al., 2019) on the token \"a\" in layer 3 (Figure 1a) . However, the pattern disappears from heads in the same layer ( Figure 1b) and from the same head on deeper layers, where, instead, a block pattern characterizes \"smart\" and \"woman\" (Figure 1c) . This variability hinders interpretability as no unique behavior emerges. Effective Attention (Brunner et al., 2020) is based on attention and shares the same issue. 7 These results further motivate the idea that attention gives only a local perspective on token contribution and contextualization (Bastings and Filippova, 2020) . However, this does not provide any useful insight for the classification task. To further validate this limited scope, we use Hidden Token Attribution (Brunner et al., 2020) and measure the contribution of each input token (i.e., its first-layer token embedding) to hidden representations. On lower layers, there is a marked diagonal contribution, meaning that tokens mainly contribute to their own representation. Interestingly, on the upper layers, a strong contribution to \"smart\" and \"woman\" appears for all the tokens in the sentence. Different patterns between HTA and attention suggest that, even in the locality of a layer and a single head, attention weights do not measure token contribution.",
"cite_spans": [
{
"start": 561,
"end": 584,
"text": "(Kovaleva et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 915,
"end": 937,
"text": "(Brunner et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 987,
"end": 988,
"text": "7",
"ref_id": null
},
{
"start": 1119,
"end": 1149,
"text": "(Bastings and Filippova, 2020)",
"ref_id": "BIBREF5"
},
{
"start": 1303,
"end": 1325,
"text": "(Brunner et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 613,
"end": 624,
"text": "(Figure 1a)",
"ref_id": "FIGREF0"
},
{
"start": 690,
"end": 700,
"text": "Figure 1b)",
"ref_id": "FIGREF0"
},
{
"start": 808,
"end": 819,
"text": "(Figure 1c)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Is attention explanation?",
"sec_num": "4.1"
},
{
"text": "We observed similar issues on other examples and for Italian models (see appendix B). We there- fore cannot consider attention as a plausible nor a faithful explanation method and discourage the use of attention to explain BERT-based misogyny classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Is attention explanation?",
"sec_num": "4.1"
},
{
"text": "Few works applied interpretability approaches to hate speech detection. Wang (2018) proposes an adaptation of explainability techniques for computer vision to visualize and understand the CNN-GRU classifier for hate speech (Zhang et al., 2018 (Bach et al., 2015) , and a selfexplanatory model (LSTM with an attention mechanism). SHAP explainer is applied (Wich et al., 2020) to investigate the impact of political bias on hate speech classification. Sample-And-Occlusion (SOC) explanation algorithm has been used in its hierarchical version in different papers for showing the results of hate speech detection (Nozza, 2021; Kennedy et al., 2020) . In this paper, we specifically focus on hate speech against women. In this context, Godoy and Tommasel (2021) apply SHAP to derive global ex-planations with the aim of exploring unintended bias of Random Forest-based misogyny classifier.",
"cite_spans": [
{
"start": 72,
"end": 83,
"text": "Wang (2018)",
"ref_id": "BIBREF56"
},
{
"start": 223,
"end": 242,
"text": "(Zhang et al., 2018",
"ref_id": "BIBREF62"
},
{
"start": 243,
"end": 262,
"text": "(Bach et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 355,
"end": 374,
"text": "(Wich et al., 2020)",
"ref_id": "BIBREF59"
},
{
"start": 610,
"end": 623,
"text": "(Nozza, 2021;",
"ref_id": "BIBREF39"
},
{
"start": 624,
"end": 645,
"text": "Kennedy et al., 2020)",
"ref_id": "BIBREF28"
},
{
"start": 732,
"end": 757,
"text": "Godoy and Tommasel (2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "While growing efforts are made for evaluating interpretability approaches for NLP models (Atanasova et al., 2020; DeYoung et al., 2020; Prasad et al., 2021; Nguyen, 2018; Hase and Bansal, 2020; Nguyen and Mart\u00ednez, 2020; Jacovi and Goldberg, 2020) , the evaluation is not domainspecific. Therefore, the benchmarking miss to consider specific sensitive problems and biases that are proper of the hate speech domain on which the explanation validation must focus. This paper fills this gap by focusing on post-hoc feature attribution explanation methods on individual predictions for the task of hate speech against women.",
"cite_spans": [
{
"start": 89,
"end": 113,
"text": "(Atanasova et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 114,
"end": 135,
"text": "DeYoung et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 136,
"end": 156,
"text": "Prasad et al., 2021;",
"ref_id": "BIBREF43"
},
{
"start": 157,
"end": 170,
"text": "Nguyen, 2018;",
"ref_id": "BIBREF38"
},
{
"start": 171,
"end": 193,
"text": "Hase and Bansal, 2020;",
"ref_id": "BIBREF22"
},
{
"start": 194,
"end": 220,
"text": "Nguyen and Mart\u00ednez, 2020;",
"ref_id": "BIBREF37"
},
{
"start": 221,
"end": 247,
"text": "Jacovi and Goldberg, 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we benchmarked different explainability approaches on Transformer-based models for the task of hate speech detection against women in English and Italian. We focus on post-hoc feature attribution methods applied to fine-tuned BERT models. Our evaluation demonstrated that SHAP and SOC provide plausible and faithful explanations and are consequently recommended for explaining misogyny classifiers' outputs. In contrast, gradient-and attention-based approaches failed in providing reliable explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "As future work, we plan to add to the benchmarking suite a systematic evaluation involving human annotators. We also plan to include recently introduced token attribution methods (Sikdar et al., 2021) as well as new families of approaches, like natural language explanations (Rajani et al., 2019; Narang et al., 2020) and input editing (Ross et al., 2021) . Finally, we will assess explanations of the most problematic data subgroups (Goel et al., 2021; Pastor et al., 2021; Wang et al., 2021) . Politecnico di Torino. GA did part of the work as a member of the DBDMG and is currently a member of MilaNLP. Computing resources were provided by the SmartData@PoliTO center on Big Data and Data Science.",
"cite_spans": [
{
"start": 179,
"end": 200,
"text": "(Sikdar et al., 2021)",
"ref_id": "BIBREF52"
},
{
"start": 275,
"end": 296,
"text": "(Rajani et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 297,
"end": 317,
"text": "Narang et al., 2020)",
"ref_id": null
},
{
"start": 336,
"end": 355,
"text": "(Ross et al., 2021)",
"ref_id": "BIBREF47"
},
{
"start": 434,
"end": 453,
"text": "(Goel et al., 2021;",
"ref_id": "BIBREF17"
},
{
"start": 454,
"end": 474,
"text": "Pastor et al., 2021;",
"ref_id": "BIBREF42"
},
{
"start": 475,
"end": 493,
"text": "Wang et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We explain BERT-based classifiers using a controlled subset of a large, fast-growing collection of explanation methods available in the literature. While replicating our experiments with different approaches, or on different data samples, from different datasets or explaining different models, we cannot exclude that some people may find the explanations offensive or stereotypical. Further, recent work has demonstrated gradient-based explanations are manipulable , questioning the reliability of this widespread category of methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": null
},
{
"text": "We, therefore, advocate for responsible use of this benchmarking suite (or any product derived from it) and suggest pairing it with human-aided evaluation. Moreover, we encourage users to consider this work as a starting point for model debugging and the included explanation methods as baselines for future developments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": null
},
{
"text": "All our experiments use the Hugging Face transformers library (Wolf et al., 2020) .",
"cite_spans": [
{
"start": 62,
"end": 81,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Training hyper-parameters",
"sec_num": null
},
{
"text": "We base our models and tokenizers on the bert-base-cased checkpoint for English tasks and on the dbmdz/bert-base-italian-cased checkpoint for Italian. We pre-process and tokenize our data using the standard pre-trained BERT tokenizer, with a maximum sequence length of 128 and right padding. We train all models for 3 epochs with a batch size of 64, a linearly decaying learning rate of 5 \u2022 10 \u22125 and 10% of the total training step as a warmup, and full precision. We use 10% of training data for validation. We evaluate the model every 50 steps on the respective validation set. At the end of the training, we use the checkpoint with the best validation loss. We re-weight the standard cross-entropy loss using the inverse of class frequency to account for class imbalance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Training hyper-parameters",
"sec_num": null
},
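A minimal sketch of this training setup, assuming a Hugging Face Trainer and pre-tokenized train/validation datasets (the WeightedTrainer subclass is our illustration of the class-weighted loss, not the authors' exact code):

```python
# Hedged sketch: fine-tuning with class-weighted cross-entropy.
import torch
from transformers import Trainer, TrainingArguments

class WeightedTrainer(Trainer):
    def __init__(self, *args, class_weights=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.class_weights = class_weights  # inverse class frequencies

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss = torch.nn.functional.cross_entropy(
            outputs.logits, labels, weight=self.class_weights)
        return (loss, outputs) if return_outputs else loss

args = TrainingArguments(
    output_dir="ami-bert", num_train_epochs=3,
    per_device_train_batch_size=64, learning_rate=5e-5,
    lr_scheduler_type="linear", warmup_ratio=0.1,
    evaluation_strategy="steps", eval_steps=50,
    save_strategy="steps", save_steps=50,
    load_best_model_at_end=True, metric_for_best_model="eval_loss",
)
```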
{
"text": "We used the Captum library (Kokhlikyan et al., 2020) with default parameters to compute gradients (G) and integrated gradients (IG). Following (Han et al., 2020) , for IG we multiply gradients by input word embeddings. For Shapley values estimation (SHAP), we use the shap library 8 with Partition-SHAP as approximation method. For Sampling-And-Occlusion (SOC), we used the implementation associated with Kennedy et al. (2020) . 9 Please refer to our repository (https://github.com /MilaNLProc/benchmarking-xai-mis ogyny) for further technical details.",
"cite_spans": [
{
"start": 27,
"end": 52,
"text": "(Kokhlikyan et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 143,
"end": 161,
"text": "(Han et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 405,
"end": 426,
"text": "Kennedy et al. (2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Explanation methods",
"sec_num": null
},
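For SHAP, a minimal sketch under the same assumptions (a fine-tuned `model` and `tokenizer`); wrapping the classifier in a transformers pipeline lets shap dispatch to its Partition algorithm with a text masker:

```python
# Hedged sketch: Partition-SHAP over a text-classification pipeline.
import shap
from transformers import pipeline

clf = pipeline("text-classification", model=model, tokenizer=tokenizer,
               return_all_scores=True)
explainer = shap.Explainer(clf)  # selects the Partition explainer for text
shap_values = explainer(["You are a smart woman"])
# Per-token contributions to the (assumed) misogynous output (index 1):
print(shap_values[0, :, 1].values)
```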
{
"text": "We used attention weights provided by the transformers library for visualization. We implemented Effective Attention and Hidden Token Attribution following Brunner et al. (2020) . We release the implementation on our repository. Figure 2 shows attention visualizations for the sentence \"p*rca p*ttana che gran pezzo di f*ga\" 8 https://github.com/slundberg/shap 9 https://github.com/BrendanKennedy/co ntextualizing-hate-speech-models-with-ex planations (Non-literal translation: \"holy sh*t what a nice piece of *ss\"). As discussed in \u00a74 (Bias due to language-specific expressions), the text is misclassified as non-misogynous and most of explanation methods correctly highlight the Italian interjection \"p*rca p*ttana\".",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "Brunner et al. (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.3 Attention maps",
"sec_num": null
},
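The attention maps and a simplified HTA can be obtained as sketched below; the HTA variant here (gradient norm of a hidden state's magnitude with respect to each input embedding) is our loose reading of Brunner et al. (2020), not their exact formulation:

```python
# Hedged sketch: extracting attention maps and a simplified Hidden Token
# Attribution (HTA) from a fine-tuned BERT (`model`, `tokenizer` assumed).
import torch

enc = tokenizer("p*rca p*ttana che gran pezzo di f*ga", return_tensors="pt")
out = model(**enc, output_attentions=True)
attn = out.attentions           # one (batch, heads, seq, seq) tensor per layer
layer3_head1 = attn[2][0, 0]    # 0-indexed: layer 3, head 1

def hta(input_ids, layer):
    embeds = model.bert.embeddings.word_embeddings(input_ids).detach()
    embeds.requires_grad_(True)
    hidden = model.bert(inputs_embeds=embeds,
                        output_hidden_states=True).hidden_states[layer]
    seq = hidden.shape[1]
    contrib = torch.zeros(seq, seq)
    for j in range(seq):  # contribution of each input token to hidden token j
        grad = torch.autograd.grad(hidden[0, j].norm(), embeds,
                                   retain_graph=True)[0]
        contrib[j] = grad[0].norm(dim=-1)
    return contrib  # rows: hidden tokens, columns: input tokens
```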
{
"text": "Similar to results reported in \u00a72.2, we cannot find useful insights in attention plots. Attention in layer 3 has a diagonal pattern in head 1, and a diagonal pattern in head 3 on the word che (\"what\"). However, these patterns disappear in layer 10 where attention is focused on p*rca. At layer 10, HTA is more spread than attention, suggesting that the latter measures only a local token contribution. Figure 2 : Attention (left), Effective Attention (center), and Hidden Token Attribution (right) maps at different layers in fine-tuned BERT. Lighter colors indicate higher weights. Sentence: \"p*rca p*ttana che gran pezzo di f*ga\", non-literal translation: \"holy sh*t what a nice piece of *ss\".",
"cite_spans": [],
"ref_spans": [
{
"start": 402,
"end": 410,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Attention plots",
"sec_num": null
},
{
"text": "https://www.datasociety.net/pubs/oh/ Online_Harassment_2016.pdf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We refer the reader toDanilevsky et al. (2020) and Madsen et al. (2021) for a recent, thorough perspective on explanation methods for NLP models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this study, the human expectation corresponds to the authors'.4 Characterizing misogyny is a much harder task, possibly modeling complex factors such as shaming, objectification, or more. Here, we simplify the task to focus on benchmarking interpretability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We rephrase and explain rephrased versions of tweets to protect privacy.6 While several work average sub-word contributions for out-of-vocabulary words, there is no general agreement on whether this brings meaningful results. Indeed, an average would assume a model that leverages tokens as a single unit, while there is no clear evidence of that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In most of our experiments, Effective Attention brings no perceptually different maps than simple Attention. The two methods are hence equivalent for local attention inspection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(a) Layer 3, Head 1 (b) Layer 3, Head 3 (c) Layer 10, Head 1 (d) Layer 10, Head 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers and area chairs for their suggestion to strengthen the paper. This research is partially supported by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (No. 949944, IN-TEGRATOR), and by Fondazione Cariplo (grant No. 2020-4288, MONICA). DN, and DH are members of the MilaNLP group, and of the Data and Marketing Insights Unit at the Bocconi Institute for Data Science and Analysis. EP is member of the DataBase and Data Mining Group (DBDMG) at",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A diagnostic study of explainability techniques for text classification",
"authors": [
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Jakob",
"middle": [
"Grue"
],
"last": "Simonsen",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Lioma",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "3256--3274",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.263"
]
},
"num": null,
"urls": [],
"raw_text": "Pepa Atanasova, Jakob Grue Simonsen, Christina Li- oma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classifi- cation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3256-3274, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Entropy-based attention regularization frees unintended bias mitigation from lists",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Attanasio",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Baralis",
"suffix": ""
}
],
"year": 2022,
"venue": "Findings of the Association for Computational Linguistics: ACL2022",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giuseppe Attanasio, Debora Nozza, Dirk Hovy, and Elena Baralis. 2022. Entropy-based attention regular- ization frees unintended bias mitigation from lists. In Findings of the Association for Computational Lin- guistics: ACL2022. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "PoliTeam @ AMI: Improving sentence embedding similarity with misogyny lexicons for automatic misogyny identificationin italian tweets",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Attanasio",
"suffix": ""
},
{
"first": "Eliana",
"middle": [],
"last": "Pastor",
"suffix": ""
}
],
"year": 2020,
"venue": "EVALITA Evaluation of NLP and Speech Tools for Italian -December 17th",
"volume": "",
"issue": "",
"pages": "48--54",
"other_ids": {
"DOI": [
"10.4000/books.aaccademia.6807"
]
},
"num": null,
"urls": [],
"raw_text": "Giuseppe Attanasio and Eliana Pastor. 2020. PoliTeam @ AMI: Improving sentence embedding similarity with misogyny lexicons for automatic misogyny iden- tificationin italian tweets. In Valerio Basile, Danilo Croce, Maria Maro, and Lucia C. Passaro, editors, EVALITA Evaluation of NLP and Speech Tools for Italian -December 17th, 2020, pages 48-54. Ac- cademia University Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Binder",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Klauschen",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2015,
"venue": "PLOS ONE",
"volume": "10",
"issue": "7",
"pages": "1--46",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0130140"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Bach, Alexander Binder, Gr\u00e9goire Montavon, Frederick Klauschen, Klaus-Robert M\u00fcller, and Wo- jciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise rele- vance propagation. PLOS ONE, 10(7):1-46.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?",
"authors": [
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "149--155",
"other_ids": {
"DOI": [
"10.18653/v1/2020.blackboxnlp-1.14"
]
},
"num": null,
"urls": [],
"raw_text": "Jasmijn Bastings and Katja Filippova. 2020. The ele- phant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 149-155, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Nuanced metrics for measuring unintended bias with real data for text classification",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Borkan",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2019,
"venue": "Companion Proceedings of The 2019 World Wide Web Conference, WWW '19",
"volume": "",
"issue": "",
"pages": "491--500",
"other_ids": {
"DOI": [
"10.1145/3308560.3317593"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced met- rics for measuring unintended bias with real data for text classification. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, page 491-500, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On identifiability in transformers",
"authors": [
{
"first": "Gino",
"middle": [],
"last": "Brunner",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Damian",
"middle": [],
"last": "Pascual",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Richter",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wattenhofer",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wat- tenhofer. 2020. On identifiability in transformers. In 8th International Conference on Learning Represen- tations, ICLR 2020. OpenReview.net.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A survey of the state of explainable AI for natural language processing",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Danilevsky",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Kun Qian",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Ban",
"middle": [],
"last": "Katsis",
"suffix": ""
},
{
"first": "Prithviraj",
"middle": [],
"last": "Kawas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "447--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Danilevsky, Kun Qian, Ranit Aharonov, Yan- nis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural lan- guage processing. In Proceedings of the 1st Confer- ence of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th Interna- tional Joint Conference on Natural Language Pro- cessing, pages 447-459, Suzhou, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "ERASER: A benchmark to evaluate rationalized NLP models",
"authors": [
{
"first": "Jay",
"middle": [],
"last": "Deyoung",
"suffix": ""
},
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Nazneen",
"middle": [],
"last": "Fatema Rajani",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Lehman",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4443--4458",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.408"
]
},
"num": null,
"urls": [],
"raw_text": "Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443-4458, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Measuring and mitigating unintended bias in text classification",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES '18",
"volume": "",
"issue": "",
"pages": "67--73",
"other_ids": {
"DOI": [
"10.1145/3278721.3278729"
]
},
"num": null,
"urls": [],
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Proceed- ings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES '18, page 67-73, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pathologies of neural models make interpretations difficult",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Shi Feng",
"suffix": ""
},
{
"first": "Alvin",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "I",
"middle": [
"I"
],
"last": "Grissom",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3719--3728",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1407"
]
},
"num": null,
"urls": [],
"raw_text": "Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719-3728, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Profiling Italian misogynist: An empirical study",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Boifava",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Workshop on Resources and Techniques for User and Author Profiling in Abusive Language",
"volume": "",
"issue": "",
"pages": "9--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabetta Fersini, Debora Nozza, and Giulia Boifava. 2020a. Profiling Italian misogynist: An empirical study. In Proceedings of the Workshop on Resources and Techniques for User and Author Profiling in Abu- sive Language, pages 9-13, Marseille, France. Euro- pean Language Resources Association (ELRA).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Overview of the EVALITA 2018 task on automatic misogyny identification (AMI)",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabetta Fersini, Debora Nozza, and Paolo Rosso. 2018. Overview of the EVALITA 2018 task on au- tomatic misogyny identification (AMI). volume 12, page 59, Turin, Italy. CEUR.org.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "AMI @ EVALITA2020: Automatic misogyny identification",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th evaluation campaign of Natural Language Processing and Speech tools for Italian",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabetta Fersini, Debora Nozza, and Paolo Rosso. 2020b. AMI @ EVALITA2020: Automatic misog- yny identification. In Proceedings of the 7th eval- uation campaign of Natural Language Processing and Speech tools for Italian (EVALITA 2020), Online. CEUR.org.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Is my model biased? exploring unintended bias in misogyny detection tasks",
"authors": [
{
"first": "Daniela",
"middle": [],
"last": "Godoy",
"suffix": ""
},
{
"first": "Antonela",
"middle": [],
"last": "Tommasel",
"suffix": ""
}
],
"year": 2021,
"venue": "AIofAI'21: 1st Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies",
"volume": "",
"issue": "",
"pages": "97--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniela Godoy and Antonela Tommasel. 2021. Is my model biased? exploring unintended bias in misog- yny detection tasks. In AIofAI'21: 1st Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies, pages 97-111.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Robustness gym: Unifying the NLP evaluation landscape",
"authors": [
{
"first": "Karan",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Nazneen Fatema Rajani",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Taschdjian",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations",
"volume": "",
"issue": "",
"pages": "42--55",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Karan Goel, Nazneen Fatema Rajani, Jesse Vig, Zachary Taschdjian, Mohit Bansal, and Christopher R\u00e9. 2021. Robustness gym: Unifying the NLP evaluation land- scape. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies: Demonstrations, pages 42-55, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "European union regulations on algorithmic decision-making and a \"right to explanation",
"authors": [
{
"first": "Bryce",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Flaxman",
"suffix": ""
}
],
"year": 2017,
"venue": "AI magazine",
"volume": "38",
"issue": "3",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryce Goodman and Seth Flaxman. 2017. European union regulations on algorithmic decision-making and a \"right to explanation\". AI magazine, 38(3):50- 57.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An expert annotated dataset for the detection of online misogyny",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Guest",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Mittos",
"suffix": ""
},
{
"first": "Nishanth",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Gareth",
"middle": [],
"last": "Tyson",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1336--1350",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.114"
]
},
"num": null,
"urls": [],
"raw_text": "Ella Guest, Bertie Vidgen, Alexandros Mittos, Nishanth Sastry, Gareth Tyson, and Helen Margetts. 2021. An expert annotated dataset for the detection of online misogyny. In Proceedings of the 16th Conference of the European Chapter of the Association for Compu- tational Linguistics: Main Volume, pages 1336-1350, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A survey of methods for explaining black box models",
"authors": [
{
"first": "Riccardo",
"middle": [],
"last": "Guidotti",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Monreale",
"suffix": ""
},
{
"first": "Salvatore",
"middle": [],
"last": "Ruggieri",
"suffix": ""
},
{
"first": "Franco",
"middle": [],
"last": "Turini",
"suffix": ""
},
{
"first": "Fosca",
"middle": [],
"last": "Giannotti",
"suffix": ""
},
{
"first": "Dino",
"middle": [],
"last": "Pedreschi",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Computing Surveys",
"volume": "51",
"issue": "5",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3236009"
]
},
"num": null,
"urls": [],
"raw_text": "Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2019. A survey of methods for explaining black box models. ACM Computing Surveys, 51(5):93:1-93:42.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Explaining black box predictions and unveiling data artifacts through influence functions",
"authors": [
{
"first": "Xiaochuang",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5553--5563",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.492"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveil- ing data artifacts through influence functions. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5553- 5563, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Evaluating explainable AI: Which algorithmic explanations help users predict model behavior?",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Hase",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5540--5552",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.491"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Hase and Mohit Bansal. 2020. Evaluating explain- able AI: Which algorithmic explanations help users predict model behavior? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5540-5552, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "FERMI at SemEval-2019 task 5: Using sentence embeddings to identify hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Vijayasaradhi",
"middle": [],
"last": "Indurthi",
"suffix": ""
},
{
"first": "Bakhtiyar",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chakravartula",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "70--74",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2009"
]
},
"num": null,
"urls": [],
"raw_text": "Vijayasaradhi Indurthi, Bakhtiyar Syed, Manish Shri- vastava, Nikhil Chakravartula, Manish Gupta, and Vasudeva Varma. 2019. FERMI at SemEval-2019 task 5: Using sentence embeddings to identify hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 70-74, Minneapo- lis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Jacovi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4198--4205",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.386"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Jacovi and Yoav Goldberg. 2020. Towards faith- fully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Understanding convolutional neural networks for text classification",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Jacovi",
"suffix": ""
},
{
"first": "Oren",
"middle": [
"Sar"
],
"last": "Shalom",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "56--65",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5408"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Jacovi, Oren Sar Shalom, and Yoav Goldberg. 2018. Understanding convolutional neural networks for text classification. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 56-65, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is not Explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3543--3556",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1357"
]
},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Towards hierarchical importance attribution: Explaining compositional semantics for neural sequence models",
"authors": [
{
"first": "Xisen",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Junyi",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Xiangyang",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue, and Xiang Ren. 2020. Towards hierarchical importance attribution: Explaining compositional semantics for neural sequence models. In 8th International Confer- ence on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Contextualizing hate speech classifiers with post-hoc explanation",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Xisen",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Aida",
"middle": [
"Mostafazadeh"
],
"last": "Davani",
"suffix": ""
},
{
"first": "Morteza",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5435--5442",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.483"
]
},
"num": null,
"urls": [],
"raw_text": "Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Da- vani, Morteza Dehghani, and Xiang Ren. 2020. Con- textualizing hate speech classifiers with post-hoc ex- planation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5435-5442, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Captum: A unified and generic model interpretability library for PyTorch",
"authors": [
{
"first": "Narine",
"middle": [],
"last": "Kokhlikyan",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Miglani",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bilal",
"middle": [],
"last": "Alsallakh",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Melnikov",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Kliushkina",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Araya",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.07896"
]
},
"num": null,
"urls": [],
"raw_text": "Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, et al. 2020. Captum: A unified and generic model interpretability library for PyTorch. arXiv preprint arXiv:2009.07896.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Revealing the dark secrets of BERT",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4365--4374",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1445"
]
},
"num": null,
"urls": [],
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Jigsaw @ AMI and HaSpeeDe2: Fine-Tuning a Pre-Trained Comment-Domain BERT Model",
"authors": [
{
"first": "Alyssa",
"middle": [],
"last": "Lees",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Kivlichan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alyssa Lees, Jeffrey Sorensen, and Ian Kivlichan. 2020. Jigsaw @ AMI and HaSpeeDe2: Fine-Tuning a Pre- Trained Comment-Domain BERT Model. In Pro- ceedings of Seventh Evaluation Campaign of Natu- ral Language Processing and Speech Tools for Ital- ian. Final Workshop (EVALITA 2020), Bologna, Italy. CEUR.org.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery",
"authors": [
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2018,
"venue": "Queue",
"volume": "16",
"issue": "3",
"pages": "31--57",
"other_ids": {
"DOI": [
"10.1145/3236386.3241340"
]
},
"num": null,
"urls": [],
"raw_text": "Zachary C. Lipton. 2018. The mythos of model in- terpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3):31-57.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A unified approach to interpreting model predictions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lundberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Ad- vances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Siva Reddy, and Sarath Chandar. 2021. Post-hoc Interpretability for Neural NLP: A Survey",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Madsen",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.2108.04840"
],
"arXiv": [
"arXiv:2108.04840"
]
},
"num": null,
"urls": [],
"raw_text": "Andreas Madsen, Siva Reddy, and Sarath Chandar. 2021. Post-hoc Interpretability for Neural NLP: A Survey. arXiv preprint arXiv:2108.04840.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Understanding and interpreting the impact of user context in hate speech detection",
"authors": [
{
"first": "Edoardo",
"middle": [],
"last": "Mosca",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Wich",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Groh",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "91--102",
"other_ids": {
"DOI": [
"10.18653/v1/2021.socialnlp-1.8"
]
},
"num": null,
"urls": [],
"raw_text": "Edoardo Mosca, Maximilian Wich, and Georg Groh. 2021. Understanding and interpreting the impact of user context in hate speech detection. In Proceedings of the Ninth International Workshop on Natural Lan- guage Processing for Social Media, pages 91-102, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Noah Fiedel, and Karishma Malkan. 2020. WT5?! Training Text-to-Text Models to Explain their Predictions",
"authors": [
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.2004.14546"
],
"arXiv": [
"arXiv:2004.14546"
]
},
"num": null,
"urls": [],
"raw_text": "Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. 2020. WT5?! Training Text-to-Text Models to Explain their Predictions. arXiv preprint arXiv:2004.14546.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "On quantitative aspects of model interpretability",
"authors": [
{
"first": "An-Phi",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mar\u00eda Rodr\u00edguez",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.07584"
]
},
"num": null,
"urls": [],
"raw_text": "An-phi Nguyen and Mar\u00eda Rodr\u00edguez Mart\u00ednez. 2020. On quantitative aspects of model interpretability. arXiv preprint arXiv:2007.07584.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Comparing automatic and human evaluation of local explanations for text classification",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1069--1078",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1097"
]
},
"num": null,
"urls": [],
"raw_text": "Dong Nguyen. 2018. Comparing automatic and human evaluation of local explanations for text classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1069-1078, New Or- leans, Louisiana. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Exposing the limits of zero-shot cross-lingual hate speech detection",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "907--914",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-short.114"
]
},
"num": null,
"urls": [],
"raw_text": "Debora Nozza. 2021. Exposing the limits of zero-shot cross-lingual hate speech detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 907-914, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Pipelines for Social Bias Testing of Large Language Models",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the First Workshop on Challenges & Perspectives in Creating Large Language Models. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debora Nozza, Federico Bianchi, , and Dirk Hovy. 2022. Pipelines for Social Bias Testing of Large Language Models. In Proceedings of the First Workshop on Challenges & Perspectives in Creating Large Lan- guage Models. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Unintended bias in misogyny detection",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Volpetti",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE/WIC/ACM International Conference on Web Intelligence, WI '19",
"volume": "",
"issue": "",
"pages": "149--155",
"other_ids": {
"DOI": [
"10.1145/3350546.3352512"
]
},
"num": null,
"urls": [],
"raw_text": "Debora Nozza, Claudia Volpetti, and Elisabetta Fersini. 2019. Unintended bias in misogyny detection. In IEEE/WIC/ACM International Conference on Web Intelligence, WI '19, page 149-155, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Looking for trouble: Analyzing classifier behavior via pattern divergence",
"authors": [
{
"first": "Eliana",
"middle": [],
"last": "Pastor",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Luca De Alfaro",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baralis",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 International Conference on Management of Data",
"volume": "",
"issue": "",
"pages": "1400--1412",
"other_ids": {
"DOI": [
"10.1145/3448016.3457284"
]
},
"num": null,
"urls": [],
"raw_text": "Eliana Pastor, Luca de Alfaro, and Elena Baralis. 2021. Looking for trouble: Analyzing classifier behavior via pattern divergence. In Proceedings of the 2021 International Conference on Management of Data, page 1400-1412, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "To what extent do human explanations of model behavior align with actual model behavior?",
"authors": [
{
"first": "Grusha",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.18653/v1/2021.blackboxnlp-1.1"
]
},
"num": null,
"urls": [],
"raw_text": "Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela, and Adina Williams. 2021. To what ex- tent do human explanations of model behavior align with actual model behavior? In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 1-14, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Explain yourself! leveraging language models for commonsense reasoning",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Nazneen Fatema Rajani",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4932--4942",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1487"
]
},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain your- self! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Why Should I Trust You?\": Explaining the Predictions of Any Classifier",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Marco T\u00falio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {
"DOI": [
"10.1145/2939672.2939778"
]
},
"num": null,
"urls": [],
"raw_text": "Marco T\u00falio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why Should I Trust You?\": Ex- plaining the Predictions of Any Classifier. In Pro- ceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144. ACM.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Offensive language detection explained",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Ruff",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "137--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian Risch, Robin Ruff, and Ralf Krestel. 2020. Offen- sive language detection explained. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 137-143, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Explaining NLP models via minimal contrastive editing (MiCE)",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "3840--3852",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.336"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Ross, Ana Marasovi\u0107, and Matthew Peters. 2021. Explaining NLP models via minimal contrastive edit- ing (MiCE). In Findings of the Association for Com- putational Linguistics: ACL-IJCNLP 2021, pages 3840-3852, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Rudin",
"suffix": ""
}
],
"year": 2019,
"venue": "Nature Machine Intelligence",
"volume": "1",
"issue": "5",
"pages": "206--215",
"other_ids": {
"DOI": [
"10.1038/s42256-019-0048-x"
]
},
"num": null,
"urls": [],
"raw_text": "Cynthia Rudin. 2019. Stop explaining black box ma- chine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Amitava Das, and Thamar Solorio. 2020. Aggression and misogyny detection using BERT: A multi-task approach",
"authors": [
{
"first": "Parth",
"middle": [],
"last": "Niloofar Safi Samghabadi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patwa",
"suffix": ""
},
{
"first": "Pykl",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "Prerana",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "126--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niloofar Safi Samghabadi, Parth Patwa, Srinivas PYKL, Prerana Mukherjee, Amitava Das, and Thamar Solorio. 2020. Aggression and misogyny detection using BERT: A multi-task approach. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 126-131, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Discretized integrated gradients for explaining language models",
"authors": [
{
"first": "Soumya",
"middle": [],
"last": "Sanyal",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10285--10299",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.805"
]
},
"num": null,
"urls": [],
"raw_text": "Soumya Sanyal and Xiang Ren. 2021. Discretized in- tegrated gradients for explaining language models. In Proceedings of the 2021 Conference on Empiri- cal Methods in Natural Language Processing, pages 10285-10299, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Is attention interpretable?",
"authors": [
{
"first": "Sofia",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2931--2951",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1282"
]
},
"num": null,
"urls": [],
"raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is attention in- terpretable? In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Integrated directional gradients: Feature interaction attribution for neural NLP models",
"authors": [
{
"first": "Sandipan",
"middle": [],
"last": "Sikdar",
"suffix": ""
},
{
"first": "Parantapa",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Kieran",
"middle": [],
"last": "Heese",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "865--878",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.71"
]
},
"num": null,
"urls": [],
"raw_text": "Sandipan Sikdar, Parantapa Bhattacharya, and Kieran Heese. 2021. Integrated directional gradients: Fea- ture interaction attribution for neural NLP models. In Proceedings of the 59th Annual Meeting of the Asso- ciation for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 865-878, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "2nd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zis- serman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In 2nd International Conference on Learning Representations, ICLR 2014.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Effective attention sheds light on interpretability",
"authors": [
{
"first": "Kaiser",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "4126--4135",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.361"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiser Sun and Ana Marasovi\u0107. 2021. Effective atten- tion sheds light on interpretability. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 4126-4135, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceed- ings of the 34th International Conference on Machine Learning -Volume 70, ICML'17, page 3319-3328. JMLR.org.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Interpreting neural network hate speech classifiers",
"authors": [
{
"first": "Cindy",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "86--92",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5111"
]
},
"num": null,
"urls": [],
"raw_text": "Cindy Wang. 2018. Interpreting neural network hate speech classifiers. In Proceedings of the 2nd Work- shop on Abusive Language Online (ALW2), pages 86-92, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Gradient-based analysis of NLP models is manipulable",
"authors": [
{
"first": "Junlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Tuyls",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "247--258",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.24"
]
},
"num": null,
"urls": [],
"raw_text": "Junlin Wang, Jens Tuyls, Eric Wallace, and Sameer Singh. 2020. Gradient-based analysis of NLP mod- els is manipulable. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 247-258, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "TextFlint: Unified multilingual robustness evaluation toolkit for natural language processing",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yicheng",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiacheng",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Yongxin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Zexiong",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Qinzhuo",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhengyan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ruotian",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zichu",
"middle": [],
"last": "Fei",
"suffix": ""
},
{
"first": "Ruijian",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xingwu",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yiding",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qiyuan",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Zhihua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shan",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Bolin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Jinlan",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Minlong",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Xiaoqing",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yaqian",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "347--355",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-demo.41"
]
},
"num": null,
"urls": [],
"raw_text": "Xiao Wang, Qin Liu, Tao Gui, Qi Zhang, Yicheng Zou, Xin Zhou, Jiacheng Ye, Yongxin Zhang, Rui Zheng, Zexiong Pang, Qinzhuo Wu, Zhengyan Li, Chong Zhang, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Shan Qin, Bolin Zhu, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, and Xuanjing Huang. 2021. TextFlint: Unified multilingual robustness evaluation toolkit for natural language processing. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 347-355, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Impact of politically biased data on hate speech classification",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Wich",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Groh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "54--64",
"other_ids": {
"DOI": [
"10.18653/v1/2020.alw-1.7"
]
},
"num": null,
"urls": [],
"raw_text": "Maximilian Wich, Jan Bauer, and Georg Groh. 2020. Impact of politically biased data on hate speech clas- sification. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 54-64, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Attention is not not explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Detecting hate speech on twitter using a convolution-GRU based deep neural network",
"authors": [
{
"first": "Ziqi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"A"
],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "The Semantic Web -15th International Conference",
"volume": "10843",
"issue": "",
"pages": "745--760",
"other_ids": {
"DOI": [
"10.1007/978-3-319-93417-4_48"
]
},
"num": null,
"urls": [],
"raw_text": "Ziqi Zhang, David Robinson, and Jonathan A. Tep- per. 2018. Detecting hate speech on twitter using a convolution-GRU based deep neural network. In The Semantic Web -15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, volume 10843 of Lecture Notes in Computer Science, pages 745-760. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Attention (left), Effective Attention (center), and Hidden Token Attribution (right) maps at different layers in fine-tuned BERT. Lighter colors indicate higher weights. Sentence: \"You are a smart woman\".",
"uris": null
},
"TABREF2": {
"num": null,
"text": "Summary of datasets in terms of the number of training, validation and test tweets, percentage of hateful records within the training split, and F1-score of BERT models on test sets.",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"text": "Example from AMI-EN test set, anonymyzed text on first row. Ground truth: non misogynous. Prediction: misogynous (P = 0.78).",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF5": {
"num": null,
"text": "(right). The text is correctly assigned to the misogynous class and the word",
"content": "<table><tr><td/><td>s*ck</td><td>a</td><td>d*ck</td><td>and</td><td>choke</td><td>you</td><td>b*tch</td></tr><tr><td colspan=\"2\">\u2206P (10 \u22122 ) -0.02</td><td>0.2</td><td>0.8</td><td>0.3</td><td>-0.1</td><td>0.03</td><td>-13.4</td></tr><tr><td>G IG SHAP SOC</td><td colspan=\"4\">0.10 -0.14 -0.16 -0.08 -0.05 0.08 0.14 0.07 0.24 -0.03 0.07 -0.05 0.20 -0.02 0.26 -0.02</td><td colspan=\"2\">0.08 -0.20 -0.22 0.10 0.05 -0.06 0.07 0.00</td><td>0.25 -0.16 0.50 0.29</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"text": "Example from AMI-EN test set, anonymyzed text on first row. Ground truth: misogynous. Prediction: misogynous (P = 0.90).",
"content": "<table><tr><td/><td>Ann</td><td>is</td><td>in</td><td>the</td><td colspan=\"2\">kitchen David</td><td>is</td><td>in</td><td>the</td><td>kitchen</td></tr><tr><td colspan=\"2\">\u2206P (10 \u22122 ) -40.4</td><td>15.4</td><td colspan=\"2\">12.7 -12.6</td><td>-24.3</td><td>-1.0</td><td>8.0</td><td>-1.3</td><td>-5.8</td><td>-6.7</td></tr><tr><td>G IG SHAP SOC</td><td colspan=\"4\">0.25 -0.15 0.27 -0.31 -0.15 -0.01 0.16 0.08 0.10 0.18 0.12 -0.33 0.28 -0.19 -0.06 0.10</td><td>0.21 -0.22 0.27 0.07</td><td colspan=\"4\">0.19 -0.36 -0.29 -0.38 -0.19 -0.05 0.18 0.09 0.09 0.14 0.09 -0.25 -0.25 -0.11 -0.03 0.04</td><td>0.28 -0.17 0.09 0.05</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"text": "",
"content": "<table><tr><td/><td colspan=\"2\">p*rca p*ttana</td><td>che</td><td colspan=\"2\">gran pezzo</td><td>di</td><td colspan=\"2\">f*ga p*ttana</td><td>che</td><td colspan=\"2\">gran pezzo</td><td>di</td><td>f*ga</td></tr><tr><td>\u2206P (10 \u22122 )</td><td>94.7</td><td>79.7</td><td>-0.8</td><td>-0.6</td><td>0.3</td><td>-0.7</td><td>-0.6</td><td>1.0</td><td>-2.3</td><td>-1.3</td><td>0.4</td><td>0.3</td><td>-22.9</td></tr><tr><td>G IG SHAP SOC</td><td>0.17 -0.25 -0.69 -0.56</td><td colspan=\"3\">0.15 -0.10 -0.09 -0.16 0.06 0.07 -0.01 0.01 0.05 -0.07 0.00 0.04</td><td colspan=\"3\">0.11 -0.04 0.05 0.05 -0.05 0.22 0.07 0.13 0.21 0.13 0.05 0.14</td><td colspan=\"3\">0.20 -0.12 -0.03 -0.25 0.08 0.10 0.15 0.10 0.13 0.00 0.05 0.07</td><td colspan=\"2\">0.14 0.11 0.10 0.04 -0.12 0.08 0.17 0.10</td><td>0.21 0.32 0.43 0.57</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}