{
"paper_id": "Q19-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:09:54.054451Z"
},
"title": "Analysis Methods in Neural Language Processing: A Survey",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "",
"location": {}
},
"email": "belinkov@mit.edu"
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "",
"location": {}
},
"email": "glass@mit.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their featurerich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work. 1 See, for instance, Noah Smith's invited talk at ACL 2017: vimeo.com/234958746. See also a recent debate on this matter by Chris Manning and Yann LeCun: www. youtube.com/watch?v=fKk9KhGRBdI. (Videos accessed on December 11, 2018.) 2 See, for example, the NIPS 2017 debate: www.youtube. com/watch?v=2hW05ZfsUUo. (Accessed on December 11, 2018.) 3 Nevertheless, one could question how feasible such an analysis is; consider, for example, interpreting support vectors in high-dimensional support vector machines (SVMs).",
"pdf_parse": {
"paper_id": "Q19-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their featurerich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work. 1 See, for instance, Noah Smith's invited talk at ACL 2017: vimeo.com/234958746. See also a recent debate on this matter by Chris Manning and Yann LeCun: www. youtube.com/watch?v=fKk9KhGRBdI. (Videos accessed on December 11, 2018.) 2 See, for example, the NIPS 2017 debate: www.youtube. com/watch?v=2hW05ZfsUUo. (Accessed on December 11, 2018.) 3 Nevertheless, one could question how feasible such an analysis is; consider, for example, interpreting support vectors in high-dimensional support vector machines (SVMs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The rise of deep learning has transformed the field of natural language processing (NLP) in recent years. Models based on neural networks have obtained impressive improvements in various tasks, including language modeling (Mikolov et al., 2010; Jozefowicz et al., 2016) , syntactic parsing (Kiperwasser and Goldberg, 2016) , machine translation (MT) (Bahdanau et al., 2014; , and many other tasks; see Goldberg (2017) for example success stories.",
"cite_spans": [
{
"start": 222,
"end": 244,
"text": "(Mikolov et al., 2010;",
"ref_id": "BIBREF124"
},
{
"start": 245,
"end": 269,
"text": "Jozefowicz et al., 2016)",
"ref_id": "BIBREF98"
},
{
"start": 290,
"end": 322,
"text": "(Kiperwasser and Goldberg, 2016)",
"ref_id": "BIBREF104"
},
{
"start": 350,
"end": 373,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 402,
"end": 417,
"text": "Goldberg (2017)",
"ref_id": "BIBREF78"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This progress has been accompanied by a myriad of new neural network architectures. In many cases, traditional feature-rich systems are being replaced by end-to-end neural networks that aim to map input text to some output prediction. As end-to-end systems are gaining prevalence, one may point to two trends. First, some push back against the abandonment of linguistic knowledge and call for incorporating it inside the networks in different ways. 1 Others strive to better understand how NLP models work. This theme of analyzing neural networks has connections to the broader work on interpretability in machine learning, along with specific characteristics of the NLP field.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Why should we analyze our neural NLP models? To some extent, this question falls into the larger question of interpretability in machine learning, which has been the subject of much debate in recent years. 2 Arguments in favor of interpretability in machine learning usually mention goals like accountability, trust, fairness, safety, and reliability (Doshi-Velez and Kim, 2017; Lipton, 2016) . Arguments against interpretability typically stress performance as the most important desideratum. All these arguments naturally apply to machine learning applications in NLP.",
"cite_spans": [
{
"start": 351,
"end": 378,
"text": "(Doshi-Velez and Kim, 2017;",
"ref_id": "BIBREF49"
},
{
"start": 379,
"end": 392,
"text": "Lipton, 2016)",
"ref_id": "BIBREF116"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the context of NLP, this question needs to be understood in light of earlier NLP work, often referred to as feature-rich or feature-engineered systems. In some of these systems, features are more easily understood by humans-they can be morphological properties, lexical classes, syntactic categories, semantic relations, etc. In theory, one could observe the importance assigned by statistical NLP models to such features in order to gain a better understanding of the model. 3 In contrast, it is more difficult to understand what happens in an end-to-end neural network model that takes input (say, word embeddings) and generates an output (say, a sentence classification). Much of the analysis work thus aims to understand how linguistic concepts that were common as features in NLP systems are captured in neural networks.",
"cite_spans": [
{
"start": 479,
"end": 480,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As the analysis of neural networks for language is becoming more and more prevalent, neural networks in various NLP tasks are being analyzed; different network architectures and components are being compared, and a variety of new analysis methods are being developed. This survey aims to review and summarize this body of work, highlight current trends, and point to existing lacunae. It organizes the literature into several themes. Section 2 reviews work that targets a fundamental question: What kind of linguistic information is captured in neural networks? We also point to limitations in current methods for answering this question. Section 3 discusses visualization methods, and emphasizes the difficulty in evaluating visualization work. In Section 4, we discuss the compilation of challenge sets, or test suites, for fine-grained evaluation, a methodology that has old roots in NLP. Section 5 deals with the generation and use of adversarial examples to probe weaknesses of neural networks. We point to unique characteristics of dealing with text as a discrete input and how different studies handle them. Section 6 summarizes work on explaining model predictions, an important goal of interpretability research. This is a relatively underexplored area, and we call for more work in this direction. Section 7 mentions a few other methods that do not fall neatly into one of the above themes. In the conclusion, we summarize the main gaps and potential research directions for the field.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is accompanied by online supplementary materials that contain detailed references for studies corresponding to Sections 2, 4, and 5 (Tables SM1, SM2 , and SM3, respectively), available at https://boknilev.github.io/ nlp-analysis-methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 158,
"text": "(Tables SM1, SM2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Before proceeding, we briefly mention some earlier work of a similar spirit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A Historical Note Reviewing the vast literature on neural networks for language is beyond our scope. 4 However, we mention here a few representative studies that focused on analyzing such networks in order to illustrate how recent trends have roots that go back to before the recent deep learning revival. Rumelhart and McClelland (1986) built a feedforward neural network for learning the English past tense and analyzed its performance on a variety of examples and conditions. They were especially concerned with the performance over the course of training, as their goal was to model the past form acquisition in children. They also analyzed a scaled-down version having eight input units and eight output units, which allowed them to describe it exhaustively and examine how certain rules manifest in network weights.",
"cite_spans": [
{
"start": 306,
"end": 337,
"text": "Rumelhart and McClelland (1986)",
"ref_id": "BIBREF154"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In his seminal work on recurrent neural networks (RNNs), Elman trained networks on synthetic sentences in a language prediction task (Elman, 1989 (Elman, , 1990 (Elman, , 1991 . Through extensive analyses, he showed how networks discover the notion of a word when predicting characters; capture syntactic structures like number agreement; and acquire word representations that reflect lexical and syntactic categories. Similar analyses were later applied to other networks and tasks (Harris, 1990; Niklasson and Lin\u00e5ker, 2000; Pollack, 1990; Frank et al., 2013) . While Elman's work was limited in some ways, such as evaluating generalization or various linguistic phenomena-as Elman himself recognized (Elman, 1989 )-it introduced methods that are still relevant today: from visualizing network activations in time, through clustering words by hidden state activations, to projecting representations to dimensions that emerge as capturing properties like sentence number or verb valency. The sections on visualization (Section 3) and identifying linguistic information (Section 2) contain many examples for these kinds of analysis.",
"cite_spans": [
{
"start": 133,
"end": 145,
"text": "(Elman, 1989",
"ref_id": "BIBREF56"
},
{
"start": 146,
"end": 160,
"text": "(Elman, , 1990",
"ref_id": "BIBREF57"
},
{
"start": 161,
"end": 175,
"text": "(Elman, , 1991",
"ref_id": "BIBREF58"
},
{
"start": 483,
"end": 497,
"text": "(Harris, 1990;",
"ref_id": "BIBREF86"
},
{
"start": 498,
"end": 526,
"text": "Niklasson and Lin\u00e5ker, 2000;",
"ref_id": "BIBREF135"
},
{
"start": 527,
"end": 541,
"text": "Pollack, 1990;",
"ref_id": "BIBREF145"
},
{
"start": 542,
"end": 561,
"text": "Frank et al., 2013)",
"ref_id": "BIBREF64"
},
{
"start": 703,
"end": 715,
"text": "(Elman, 1989",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Captured in Neural Networks?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What Linguistic Information Is",
"sec_num": "2"
},
{
"text": "Neural network models in NLP are typically trained in an end-to-end manner on input-output pairs, without explicitly encoding linguistic features. Thus, a primary question is the following: What linguistic information is captured in neural networks? When examining answers to this question, it is convenient to consider three dimensions: which methods are used for conducting the analysis, what kind of linguistic information is sought, and which objects in the neural network are being investigated. Table SM1 (in the supplementary materials) categorizes relevant analysis work according to these criteria. In the next subsections, we discuss trends in analysis work along these lines, followed by a discussion of limitations of current approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 501,
"end": 510,
"text": "Table SM1",
"ref_id": null
}
],
"eq_spans": [],
"section": "What Linguistic Information Is",
"sec_num": "2"
},
{
"text": "The most common approach for associating neural network components with linguistic properties is to predict such properties from activations of the neural network. Typically, in this approach a neural network model is trained on some task (say, MT) and its weights are frozen. Then, the trained model is used for generating feature representations for another task by running it on a corpus with linguistic annotations and recording the representations (say, hidden state activations). Another classifier is then used for predicting the property of interest (say, part-of-speech [POS] tags). The performance of this classifier is used for evaluating the quality of the generated representations, and by proxy that of the original model. This kind of approach has been used in numerous papers in recent years; see Table SM1 for references. 5 It is referred to by various names, including ''auxiliary prediction tasks'' (Adi et al., 2017b) , ''diagnostic classifiers'' (Veldhoen et al., 2016) , and ''probing tasks'' (Conneau et al., 2018) .",
"cite_spans": [
{
"start": 839,
"end": 840,
"text": "5",
"ref_id": null
},
{
"start": 918,
"end": 937,
"text": "(Adi et al., 2017b)",
"ref_id": null
},
{
"start": 967,
"end": 990,
"text": "(Veldhoen et al., 2016)",
"ref_id": "BIBREF176"
},
{
"start": 1015,
"end": 1037,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 813,
"end": 822,
"text": "Table SM1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "2.1"
},
{
"text": "As an example of this approach, let us walk through an application to analyzing syntax in neural machine translation (NMT) by Shi et al. (2016b) . In this work, two NMT models were trained on standard parallel data-English\u2192 French and English\u2192German. The trained models (specifically, the encoders) were run on an annotated corpus and their hidden states were used for training a logistic regression classifier that predicts different syntactic properties. The authors concluded that the NMT encoders learn significant syntactic information at both word level and sentence level. They also compared representations at different encoding layers and found that ''local features are somehow preserved in the lower layer whereas more global, abstract information tends to be stored in the upper layer.'' These results demonstrate the kind of insights that the classification analysis may lead to, especially when comparing different models or model components.",
"cite_spans": [
{
"start": 126,
"end": 144,
"text": "Shi et al. (2016b)",
"ref_id": "BIBREF164"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2.1"
},
{
"text": "Other methods for finding correspondences between parts of the neural network and certain properties include counting how often attention weights agree with a linguistic property like anaphora resolution (Voita et al., 2018) or directly computing correlations between neural network activations and some property; for example, correlating RNN state activations with depth in a syntactic tree (Qian et al., 2016a) or with Melfrequency cepstral coefficient (MFCC) acoustic features (Wu and King, 2016) . Such correspondence may also be computed indirectly. For instance, Alishahi et al. (2017) defined an ABX discrimination task to evaluate how a neural model of speech (grounded in vision) encoded phonology. Given phoneme representations from different layers in their model, and three phonemes, A, B, and X, they compared whether the model representation for X is closer to A or B. This discrimination task enabled them to draw conclusions about which layers encoder phonology better, observing that lower layers generally encode more phonological information.",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "(Voita et al., 2018)",
"ref_id": "BIBREF177"
},
{
"start": 480,
"end": 499,
"text": "(Wu and King, 2016)",
"ref_id": "BIBREF185"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2.1"
},
{
"text": "Different kinds of linguistic information have been analyzed, ranging from basic properties like sentence length, word position, word presence, or simple word order, to morphological, syntactic, and semantic information. Phonetic/phonemic information, speaker information, and style and accent information have been studied in neural network models for speech, or in joint audio-visual models. See Table SM1 for references.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 407,
"text": "Table SM1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linguistic Phenomena",
"sec_num": "2.2"
},
{
"text": "While it is difficult to synthesize a holistic picture from this diverse body of work, it appears that neural networks are able to learn a substantial amount of information on various linguistic phenomena. These models are especially successful at capturing frequent properties, while some rare properties are more difficult to learn. Linzen et al. (2016) , for instance, found that long short-term memory (LSTM) language models are able to capture subject-verb agreement in many common cases, while direct supervision is required for solving harder cases.",
"cite_spans": [
{
"start": 335,
"end": 355,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF115"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Phenomena",
"sec_num": "2.2"
},
{
"text": "Another theme that emerges in several studies is the hierarchical nature of the learned representations. We have already mentioned such findings regarding NMT (Shi et al., 2016b ) and a visually grounded speech model . Hierarchical representations of syntax were also reported to emerge in other RNN models (Blevins et al., 2018) .",
"cite_spans": [
{
"start": 159,
"end": 177,
"text": "(Shi et al., 2016b",
"ref_id": "BIBREF164"
},
{
"start": 307,
"end": 329,
"text": "(Blevins et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Phenomena",
"sec_num": "2.2"
},
{
"text": "Finally, a couple of papers discovered that models trained with latent trees perform better on natural language inference (NLI) (Williams et al., 2018; Maillard and Clark, 2018) than ones trained with linguistically annotated trees. Moreover, the trees in these models do not resemble syntactic trees corresponding to known linguistic theories, which casts doubts on the importance of syntax-learning in the underlying neural network. 6",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "(Williams et al., 2018;",
"ref_id": "BIBREF184"
},
{
"start": 152,
"end": 177,
"text": "Maillard and Clark, 2018)",
"ref_id": "BIBREF120"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Phenomena",
"sec_num": "2.2"
},
{
"text": "In terms of the object of study, various neural network components were investigated, including word embeddings, RNN hidden states or gate activations, sentence embeddings, and attention weights in sequence-to-sequence (seq2seq) models. Generally less work has analyzed convolutional neural networks in NLP, but see Jacovi et al. (2018) for a recent exception. In speech processing, researchers have analyzed layers in deep neural networks for speech recognition and different speaker embeddings. Some analysis has also been devoted to joint language-vision or audio-vision models, or to similarities between word embeddings and con volutional image representations. Table SM1 provides detailed references.",
"cite_spans": [
{
"start": 316,
"end": 336,
"text": "Jacovi et al. (2018)",
"ref_id": "BIBREF95"
}
],
"ref_spans": [
{
"start": 667,
"end": 676,
"text": "Table SM1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Network Components",
"sec_num": "2.3"
},
{
"text": "The classification approach may find that a certain amount of linguistic information is captured in the neural network. However, this does not necessarily mean that the information is used by the network. For example, Vanmassenhove et al. (2017) investigated aspect in NMT (and in phrase-based statistical MT). They trained a classifier on NMT sentence encoding vectors and found that they can accurately predict tense about 90% of the time. However, when evaluating the output translations, they found them to have the correct tense only 79% of the time. They interpreted this result to mean that ''part of the aspectual information is lost during decoding.'' Relatedly, C\u00edfka and Bojar (2018) compared the performance of various NMT models in terms of translation quality (BLEU) and representation quality (classification tasks). They found a negative correlation between the two, suggesting that high-quality systems may not be learning certain sentence meanings. In contrast, Artetxe et al. (2018) showed that word embeddings contain divergent linguistic information, which can be uncovered by applying a linear transformation on the learned embeddings. Their results suggest an alternative explanation, showing that ''embedding models are able to encode divergent linguistic information but have limits on how this information is surfaced.'' From a methodological point of view, most of the relevant analysis work is concerned with correlation: How correlated are neural network components with linguistic properties? What may be lacking is a measure of causation: How does the encoding of linguistic properties affect the system output? Giulianelli et al. (2018) make some headway on this question. They predicted number agreement from RNN hidden states and gates at different time steps. They then intervened in how the model processes the sentence by changing a hidden activation based on the difference between the prediction and the correct label. 
This improved agreement prediction accuracy, and the effect persisted over the course of the sentence, indicating that this information has an effect on the model. However, they did not report the effect on overall model quality, for example by measuring perplexity. Methods from causal inference may shed new light on some of these questions.",
"cite_spans": [
{
"start": 218,
"end": 245,
"text": "Vanmassenhove et al. (2017)",
"ref_id": "BIBREF175"
},
{
"start": 672,
"end": 694,
"text": "C\u00edfka and Bojar (2018)",
"ref_id": "BIBREF39"
},
{
"start": 980,
"end": 1001,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF9"
},
{
"start": 1643,
"end": 1668,
"text": "Giulianelli et al. (2018)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "2.4"
},
{
"text": "Finally, the predictor for the auxiliary task is usually a simple classifier, such as logistic regression. A few studies compared different classifiers and found that deeper classifiers lead to overall better results, but do not alter the respective trends when comparing different models or components (Qian et al., 2016b; Belinkov, 2018). Interestingly, Conneau et al. (2018) found that tasks requiring more nuanced linguistic knowledge (e.g., tree depth, coordination inversion) gain the most from using a deeper classifier. However, the approach is usually taken for granted; given its prevalence, it appears that better theoretical or empirical foundations are in place.",
"cite_spans": [
{
"start": 356,
"end": 377,
"text": "Conneau et al. (2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "2.4"
},
{
"text": "Visualization is a valuable tool for analyzing neural networks in the language domain and beyond. Early work visualized hidden unit activations in RNNs trained on an artificial language modeling task, and observed how they correspond to certain grammatical relations such as agreement (Elman, 1991) . Much recent work has focused on visualizing activations on specific examples in modern neural networks for language (Karpathy et al., 2015; K\u00e1d\u00e1r et al., 2017; Qian et al., 2016a; and speech (Wu and King, 2016; Nagamine et al., 2015; Wang et al., 2017b) . Figure 1 shows an example visualization of a neuron that captures position of words in a sentence. The heatmap uses blue and red colors for negative and positive activation values, respectively, enabling the user to quickly grasp the function of this neuron.",
"cite_spans": [
{
"start": 285,
"end": 298,
"text": "(Elman, 1991)",
"ref_id": "BIBREF58"
},
{
"start": 417,
"end": 440,
"text": "(Karpathy et al., 2015;",
"ref_id": "BIBREF100"
},
{
"start": 441,
"end": 460,
"text": "K\u00e1d\u00e1r et al., 2017;",
"ref_id": "BIBREF99"
},
{
"start": 461,
"end": 480,
"text": "Qian et al., 2016a;",
"ref_id": "BIBREF146"
},
{
"start": 492,
"end": 511,
"text": "(Wu and King, 2016;",
"ref_id": "BIBREF185"
},
{
"start": 512,
"end": 534,
"text": "Nagamine et al., 2015;",
"ref_id": "BIBREF131"
},
{
"start": 535,
"end": 554,
"text": "Wang et al., 2017b)",
"ref_id": "BIBREF182"
}
],
"ref_spans": [
{
"start": 557,
"end": 565,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Visualization",
"sec_num": "3"
},
{
"text": "The attention mechanism that originated in work on NMT (Bahdanau et al., 2014) also lends itself to a natural visualization. The alignments obtained via different attention mechanisms have produced visualizations ranging from tasks like NLI (Rockt\u00e4schel et al., 2016; Yin et al., 2016) , summarization (Rush et al., 2015) , MT post-editing (Jauregi Unanue et al., 2018), and morphological inflection (Aharoni and Goldberg, 2017) to matching users on social media (Tay et al., 2018) . Figure 2 reproduces a visualization of attention alignments from the original work by Bahdanau et al. Here grayscale values correspond to the weight of the attention between words in an English source sentence (columns) and its French translation (rows). As Bahdanau et al. explain, this visualization demonstrates that the NMT model learned a soft alignment between source and target words. Some aspects of word order may also be noticed, as in the reordering of noun and adjective when translating the phrase ''European Economic Area.'' Another line of work computes various saliency measures to attribute predictions to input features. The important or salient features can then be visualized in selected examples (Li et al., 2016a; Aubakirova and Bansal, 2016; Sundararajan et al., 2017; Arras et al., 2017a,b; Ding et al., 2017; Mudrakarta et al., 2018; Montavon et al., 2018; Godin et al., 2018) . Saliency can also be computed with respect to intermediate values, rather than input features (Ghaeini et al., 2018) . 7 An instructive visualization technique is to cluster neural network activations and compare them to some linguistic property. Early work clustered RNN activations, showing that they organize in lexical categories (Elman, 1989 (Elman, , 1990 . Similar techniques have been followed by others. 
Recent examples include clustering of sentence embeddings in an RNN encoder trained in a multitask learning scenario (Brunner et al., 2017) , and phoneme clusters in a joint audio-visual RNN model .",
"cite_spans": [
{
"start": 55,
"end": 78,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 241,
"end": 267,
"text": "(Rockt\u00e4schel et al., 2016;",
"ref_id": "BIBREF151"
},
{
"start": 268,
"end": 285,
"text": "Yin et al., 2016)",
"ref_id": "BIBREF187"
},
{
"start": 302,
"end": 321,
"text": "(Rush et al., 2015)",
"ref_id": "BIBREF155"
},
{
"start": 400,
"end": 428,
"text": "(Aharoni and Goldberg, 2017)",
"ref_id": "BIBREF2"
},
{
"start": 463,
"end": 481,
"text": "(Tay et al., 2018)",
"ref_id": "BIBREF173"
},
{
"start": 570,
"end": 585,
"text": "Bahdanau et al.",
"ref_id": null
},
{
"start": 742,
"end": 785,
"text": "Bahdanau et al. explain, this visualization",
"ref_id": null
},
{
"start": 1201,
"end": 1219,
"text": "(Li et al., 2016a;",
"ref_id": "BIBREF112"
},
{
"start": 1220,
"end": 1248,
"text": "Aubakirova and Bansal, 2016;",
"ref_id": "BIBREF10"
},
{
"start": 1249,
"end": 1275,
"text": "Sundararajan et al., 2017;",
"ref_id": "BIBREF168"
},
{
"start": 1276,
"end": 1298,
"text": "Arras et al., 2017a,b;",
"ref_id": null
},
{
"start": 1299,
"end": 1317,
"text": "Ding et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 1318,
"end": 1342,
"text": "Mudrakarta et al., 2018;",
"ref_id": "BIBREF127"
},
{
"start": 1343,
"end": 1365,
"text": "Montavon et al., 2018;",
"ref_id": "BIBREF126"
},
{
"start": 1366,
"end": 1385,
"text": "Godin et al., 2018)",
"ref_id": "BIBREF77"
},
{
"start": 1482,
"end": 1504,
"text": "(Ghaeini et al., 2018)",
"ref_id": "BIBREF74"
},
{
"start": 1507,
"end": 1508,
"text": "7",
"ref_id": null
},
{
"start": 1722,
"end": 1734,
"text": "(Elman, 1989",
"ref_id": "BIBREF56"
},
{
"start": 1735,
"end": 1749,
"text": "(Elman, , 1990",
"ref_id": "BIBREF57"
},
{
"start": 1918,
"end": 1940,
"text": "(Brunner et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 484,
"end": 492,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Visualization",
"sec_num": "3"
},
{
"text": "A few online tools for visualizing neural networks have recently become available. LSTMVis (Strobelt et al., 2018b) visualizes RNN activations, focusing on tracing hidden state dynamics. 8 Seq2Seq-Vis (Strobelt et al., 2018a) visualizes different modules in attention-based seq2seq models, with the goal of examining model decisions and testing alternative decisions. Another tool focused on comparing attention alignments was proposed by Rikters (2018) . It also provides translation confidence scores based on the distribution of attention weights. NeuroX (Dalvi et al., 2019b ) is a tool for finding and analyzing individual neurons, focusing on machine translation.",
"cite_spans": [
{
"start": 91,
"end": 115,
"text": "(Strobelt et al., 2018b)",
"ref_id": "BIBREF167"
},
{
"start": 201,
"end": 225,
"text": "(Strobelt et al., 2018a)",
"ref_id": "BIBREF166"
},
{
"start": 439,
"end": 453,
"text": "Rikters (2018)",
"ref_id": "BIBREF149"
},
{
"start": 558,
"end": 578,
"text": "(Dalvi et al., 2019b",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "3"
},
{
"text": "Evaluation As in much work on interpretability, evaluating visualization quality is difficult and often limited to qualitative examples. A few notable exceptions report human evaluations of visualization quality. showed human raters hierarchical clusterings of input words generated by two interpretation methods, and asked them to evaluate which method is more accurate, or in which method they trust more. Others reported human evaluations for attention visualization in conversation modeling (Freeman et al., 2018) and medical code prediction tasks (Mullenbach et al., 2018) .",
"cite_spans": [
{
"start": 495,
"end": 517,
"text": "(Freeman et al., 2018)",
"ref_id": "BIBREF65"
},
{
"start": 552,
"end": 577,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF128"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "3"
},
{
"text": "The availability of open-source tools of the sort described above will hopefully encourage users to utilize visualization in their regular research and development cycle. However, it remains to be seen how useful visualizations turn out to be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "3"
},
{
"text": "The majority of benchmark datasets in NLP are drawn from text corpora, reflecting a natural frequency distribution of language phenomena. While useful in practice for evaluating system performance in the average case, such datasets may fail to capture a wide range of phenomena. An alternative evaluation framework consists of challenge sets, also known as test suites, which have been used in NLP for a long time (Lehmann et al., 1996) , especially for evaluating MT systems (King and Falkedal, 1990; Isahara, 1995; Koh et al., 2001 ). Lehmann et al. (1996) noted several key properties of test suites: systematicity, control over data, inclusion of negative data, and exhaustivity. They contrasted such datasets with test corpora, ''whose main advantage is that they reflect naturally occurring data.'' This idea underlines much of the work on challenge sets and is echoed in more recent work . For instance, Cooper et al. (1996) constructed a semantic test suite that targets phenomena as diverse as quantifiers, plurals, anaphora, ellipsis, adjectival properties, and so on.",
"cite_spans": [
{
"start": 414,
"end": 436,
"text": "(Lehmann et al., 1996)",
"ref_id": "BIBREF109"
},
{
"start": 476,
"end": 501,
"text": "(King and Falkedal, 1990;",
"ref_id": "BIBREF102"
},
{
"start": 502,
"end": 516,
"text": "Isahara, 1995;",
"ref_id": "BIBREF93"
},
{
"start": 517,
"end": 533,
"text": "Koh et al., 2001",
"ref_id": "BIBREF105"
},
{
"start": 537,
"end": 558,
"text": "Lehmann et al. (1996)",
"ref_id": "BIBREF109"
},
{
"start": 911,
"end": 931,
"text": "Cooper et al. (1996)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge Sets",
"sec_num": "4"
},
{
"text": "After a hiatus of a couple of decades, 9 challenge sets have recently gained renewed popularity in the NLP community. In this section, we include datasets used for evaluating neural network models that diverge from the common averagecase evaluation. Many of them share some of the properties noted by Lehmann et al. (1996) , although negative examples (ill-formed data) are typically less utilized. The challenge datasets can be categorized along the following criteria: the task they seek to evaluate, the linguistic phenomena they aim to study, the language(s) they target, their size, their method of construction, and how performance is evaluated. 10 Table SM2 (in the supplementary materials) categorizes many recent challenge sets along these criteria. Below we discuss common trends along these lines.",
"cite_spans": [
{
"start": 301,
"end": 322,
"text": "Lehmann et al. (1996)",
"ref_id": "BIBREF109"
},
{
"start": 652,
"end": 654,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 655,
"end": 664,
"text": "Table SM2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Challenge Sets",
"sec_num": "4"
},
{
"text": "By far, the most targeted tasks in challenge sets are NLI and MT. This can partly be explained by the popularity of these tasks and the prevalence of neural models proposed for solving them. Perhaps more importantly, tasks like NLI and MT arguably require inferences at various linguistic levels, making the challenge set evaluation especially attractive. Still, other high-level tasks like reading comprehension or question answering have not received as much attention, and may also benefit from the careful construction of challenge sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "4.1"
},
{
"text": "A significant body of work aims to evaluate the quality of embedding models by correlating the similarity they induce on word or sentence pairs with human similarity judgments. Datasets containing such similarity scores are often used to evaluate word embeddings (Finkelstein et al., 2002; Bruni et al., 2012; Hill et al., 2015, inter alia) or sentence embeddings; see the many shared tasks on semantic textual similarity in SemEval (Cer et al., 2017 , and previous editions). Many of these datasets evaluate similarity at a coarse-grained level, but some provide a more fine-grained evaluation of similarity or relatedness. For example, some datasets are dedicated for specific word classes such as verbs (Gerz et al., 2016) or rare words (Luong et al., 2013) , or for evaluating compositional knowledge in sentence embeddings (Marelli et al., 2014) . Multilingual and cross-lingual versions have also been collected (Leviant and Reichart, 2015; Cer et al., 2017) . Although these datasets are widely used, this kind of evaluation has been criticized for its subjectivity and questionable correlation with downstream performance (Faruqui et al., 2016) .",
"cite_spans": [
{
"start": 263,
"end": 289,
"text": "(Finkelstein et al., 2002;",
"ref_id": "BIBREF63"
},
{
"start": 290,
"end": 309,
"text": "Bruni et al., 2012;",
"ref_id": "BIBREF24"
},
{
"start": 310,
"end": 340,
"text": "Hill et al., 2015, inter alia)",
"ref_id": null
},
{
"start": 433,
"end": 450,
"text": "(Cer et al., 2017",
"ref_id": "BIBREF30"
},
{
"start": 706,
"end": 725,
"text": "(Gerz et al., 2016)",
"ref_id": "BIBREF72"
},
{
"start": 740,
"end": 760,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF119"
},
{
"start": 828,
"end": 850,
"text": "(Marelli et al., 2014)",
"ref_id": "BIBREF121"
},
{
"start": 918,
"end": 946,
"text": "(Leviant and Reichart, 2015;",
"ref_id": "BIBREF111"
},
{
"start": 947,
"end": 964,
"text": "Cer et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 1130,
"end": 1152,
"text": "(Faruqui et al., 2016)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "4.1"
},
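The correlation-based evaluation described above can be sketched as follows. Everything here is a toy (the embeddings and human judgment scores are invented for illustration, and the Spearman implementation assumes no tied values): rank word pairs by embedding cosine similarity and by human scores, then compare the two rankings.

```python
import math

# Toy embeddings and invented human similarity judgments.
EMB = {"cat": [1.0, 0.1], "dog": [0.9, 0.2], "car": [0.1, 1.0]}
HUMAN = {("cat", "dog"): 9.0, ("cat", "car"): 2.0, ("dog", "car"): 2.5}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def spearman(xs, ys):
    """Spearman correlation via the rank-difference formula
    (simplified: assumes no ties)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

pairs = list(HUMAN)
model = [cosine(EMB[a], EMB[b]) for a, b in pairs]
human = [HUMAN[p] for p in pairs]
print(spearman(model, human))  # 1.0 when the model ranks pairs like humans
```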
{
"text": "One of the primary goals of challenge sets is to evaluate models on their ability to handle specific linguistic phenomena. While earlier studies emphasized exhaustivity (Cooper et al., 1996; Lehmann et al., 1996) , recent ones tend to focus on a few properties of interest. For example, Sennrich (2017) introduced a challenge set for MT evaluation focusing on five properties: subject-verb agreement, noun phrase agreement, verb-particle constructions, polarity, and transliteration. Slightly more elaborated is an MT challenge set for morphology, including 14 morphological properties (Burlot and Yvon, 2017) . See Table SM2 for references to datasets targeting other phenomena.",
"cite_spans": [
{
"start": 169,
"end": 190,
"text": "(Cooper et al., 1996;",
"ref_id": "BIBREF41"
},
{
"start": 191,
"end": 212,
"text": "Lehmann et al., 1996)",
"ref_id": "BIBREF109"
},
{
"start": 586,
"end": 609,
"text": "(Burlot and Yvon, 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 616,
"end": 625,
"text": "Table SM2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linguistic Phenomena",
"sec_num": "4.2"
},
{
"text": "Other challenge sets cover a more diverse range of linguistic properties, in the spirit of some of the earlier work. For instance, extending the categories in Cooper et al. (1996) , the GLUE analysis set for NLI covers more than 30 phenomena in four coarse categories (lexical semantics, predicate-argument structure, logic, and knowledge). In MT evaluation, Burchardt et al. (2017) reported results using a large test suite covering 120 phenomena, partly based on Lehmann et al. (1996) . 11 Isabelle et al. (2017) and Isabelle and Kuhn (2018) prepared challenge sets for MT evaluation covering fine-grained phenomena at morpho-syntactic, syntactic, and lexical levels.",
"cite_spans": [
{
"start": 159,
"end": 179,
"text": "Cooper et al. (1996)",
"ref_id": "BIBREF41"
},
{
"start": 359,
"end": 382,
"text": "Burchardt et al. (2017)",
"ref_id": "BIBREF26"
},
{
"start": 465,
"end": 486,
"text": "Lehmann et al. (1996)",
"ref_id": "BIBREF109"
},
{
"start": 489,
"end": 491,
"text": "11",
"ref_id": null
},
{
"start": 492,
"end": 514,
"text": "Isabelle et al. (2017)",
"ref_id": "BIBREF91"
},
{
"start": 519,
"end": 543,
"text": "Isabelle and Kuhn (2018)",
"ref_id": "BIBREF92"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Phenomena",
"sec_num": "4.2"
},
{
"text": "Generally, datasets that are constructed programmatically tend to cover less fine-grained linguistic properties, while manually constructed datasets represent more diverse phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Phenomena",
"sec_num": "4.2"
},
{
"text": "As unfortunately usual in much NLP work, especially neural NLP, the vast majority of challenge sets are in English. This situation is slightly better in MT evaluation, where naturally all datasets feature other languages (see Table SM2 ). A notable exception is the work by Gulordava et al. (2018) , who constructed examples for evaluating number agreement in language modeling in English, Russian, Hebrew, and Italian. Clearly, there is room for more challenge sets in non-English languages. However, perhaps more pressing is the need for large-scale non-English datasets (besides MT) to develop neural models for popular NLP tasks.",
"cite_spans": [
{
"start": 274,
"end": 297,
"text": "Gulordava et al. (2018)",
"ref_id": "BIBREF82"
}
],
"ref_spans": [
{
"start": 226,
"end": 235,
"text": "Table SM2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Languages",
"sec_num": "4.3"
},
{
"text": "The size of proposed challenge sets varies greatly (Table SM2 ). As expected, datasets constructed by hand are smaller, with typical sizes in the hundreds. Automatically built datasets are much larger, ranging from several thousands to close to a hundred thousand (Sennrich, 2017) , or even more than one million examples (Linzen et al., 2016) . In the latter case, the authors argue that such a large test set is needed for obtaining a sufficient representation of rare cases. A few manually constructed datasets contain a fairly large number of examples, up to 10 thousand (Burchardt et al., 2017) .",
"cite_spans": [
{
"start": 264,
"end": 280,
"text": "(Sennrich, 2017)",
"ref_id": "BIBREF161"
},
{
"start": 322,
"end": 343,
"text": "(Linzen et al., 2016)",
"ref_id": "BIBREF115"
},
{
"start": 575,
"end": 599,
"text": "(Burchardt et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 51,
"end": 61,
"text": "(Table SM2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Scale",
"sec_num": "4.4"
},
{
"text": "Challenge sets are usually created either programmatically or manually, by handcrafting specific examples. Often, semi-automatic methods are used to compile an initial list of examples that is manually verified by annotators. The specific method also affects the kind of language use and how natural or artificial/synthetic the examples are. We describe here some trends in dataset construction methods in the hope that they may be useful for researchers contemplating new datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction Method",
"sec_num": "4.5"
},
{
"text": "Several datasets were constructed by modifying or extracting examples from existing datasets. For instance, Sanchez et al. (2018) and Glockner et al. (2018) extracted examples from SNLI (Bowman et al., 2015) and replaced specific words such as hypernyms, synonyms, and antonyms, followed by manual verification. Linzen et al. (2016) , on the other hand, extracted examples of subject-verb agreement from raw texts using heuristics, resulting in a large-scale dataset. Gulordava et al. (2018) extended this to other agreement phenomena, but they relied on syntactic information available in treebanks, resulting in a smaller dataset.",
"cite_spans": [
{
"start": 108,
"end": 129,
"text": "Sanchez et al. (2018)",
"ref_id": "BIBREF158"
},
{
"start": 134,
"end": 156,
"text": "Glockner et al. (2018)",
"ref_id": "BIBREF76"
},
{
"start": 186,
"end": 207,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 312,
"end": 332,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF115"
},
{
"start": 468,
"end": 491,
"text": "Gulordava et al. (2018)",
"ref_id": "BIBREF82"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Construction Method",
"sec_num": "4.5"
},
{
"text": "Several challenge sets utilize existing test suites, either as a direct source of examples (Burchardt et al., 2017) or for searching similar naturally occurring examples . 12 Sennrich (2017) introduced a method for evaluating NMT systems via contrastive translation pairs, where the system is asked to estimate the probability of two candidate translations that are designed to reflect specific linguistic properties. Sennrich generated such pairs programmatically by applying simple heuristics, such as changing gender and number to induce agreement errors, resulting in a large-scale challenge set of close to 100 thousand examples. This framework was extended to evaluate other properties, but often requiring more sophisticated generation methods like using morphological analyzers/ generators (Burlot and Yvon, 2017) or more manual involvement in generation (Bawden et al., 2018) or verification (Rios Gonzales et al., 2017) .",
"cite_spans": [
{
"start": 91,
"end": 115,
"text": "(Burchardt et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 172,
"end": 174,
"text": "12",
"ref_id": null
},
{
"start": 798,
"end": 821,
"text": "(Burlot and Yvon, 2017)",
"ref_id": "BIBREF27"
},
{
"start": 863,
"end": 884,
"text": "(Bawden et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 907,
"end": 929,
"text": "Gonzales et al., 2017)",
"ref_id": "BIBREF150"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Construction Method",
"sec_num": "4.5"
},
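The contrastive-pair evaluation described above can be sketched as follows. The scoring function is an invented stand-in for a real NMT model's conditional log-probability; the system passes a pair when it assigns the reference translation a higher score than the minimally altered contrastive one.

```python
import math

def toy_log_prob(source, target):
    """Illustrative stand-in for a model's log P(target | source):
    rewards number agreement inside the target."""
    tokens = target.split()
    agree = ("dogs" in tokens) == ("bark" in tokens)
    return math.log(0.9 if agree else 0.1)

# (source, reference, contrastive) triples; the contrastive side
# introduces a single agreement error.
PAIRS = [("die Hunde bellen", "the dogs bark", "the dogs barks")]

def contrastive_accuracy(pairs, score):
    correct = sum(score(s, ref) > score(s, con) for s, ref, con in pairs)
    return correct / len(pairs)

print(contrastive_accuracy(PAIRS, toy_log_prob))  # 1.0
```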
{
"text": "Finally, a few studies define templates that capture certain linguistic properties and instantiate them with word lists (Dasgupta et al., 2018; Zhao et al., 2018a) . Template-based generation has the advantage of providing more control, for example for obtaining a specific vocabulary distribution, but this comes at the expense of how natural the examples are.",
"cite_spans": [
{
"start": 120,
"end": 143,
"text": "(Dasgupta et al., 2018;",
"ref_id": "BIBREF46"
},
{
"start": 144,
"end": 163,
"text": "Zhao et al., 2018a)",
"ref_id": "BIBREF192"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Construction Method",
"sec_num": "4.5"
},
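Template-based generation of the kind described above can be sketched as follows (the template and word lists are invented): instantiate a subject-verb agreement template with a controlled vocabulary, yielding grammatical and ungrammatical contrast pairs.

```python
# Controlled word lists: each subject carries its grammatical number.
SUBJECTS = [("the author", "sg"), ("the authors", "pl")]
VERBS = {"sg": "writes", "pl": "write"}

def generate_pairs():
    """Fill the template '<subject> <verb> books' with the agreeing
    verb (grammatical) and the non-agreeing one (ungrammatical)."""
    pairs = []
    for subject, number in SUBJECTS:
        other = "pl" if number == "sg" else "sg"
        good = f"{subject} {VERBS[number]} books"
        bad = f"{subject} {VERBS[other]} books"
        pairs.append((good, bad))
    return pairs

for good, bad in generate_pairs():
    print(good, "|", bad)
```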
{
"text": "Systems are typically evaluated by their performance on the challenge set examples, either with the same metric used for evaluating the system in the first place, or via a proxy, as in the contrastive pairs evaluation of Sennrich (2017) . Automatic evaluation metrics are cheap to obtain and can be calculated on a large scale. However, they may miss certain aspects. Thus a few studies report human evaluation on their challenge sets, such as in MT (Isabelle et al., 2017; Burchardt et al., 2017) .",
"cite_spans": [
{
"start": 221,
"end": 236,
"text": "Sennrich (2017)",
"ref_id": "BIBREF161"
},
{
"start": 450,
"end": 473,
"text": "(Isabelle et al., 2017;",
"ref_id": "BIBREF91"
},
{
"start": 474,
"end": 497,
"text": "Burchardt et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.6"
},
{
"text": "We note here also that judging the quality of a model by its performance on a challenge set can be tricky. Some authors emphasize their wish to test systems on extreme or difficult cases, ''beyond normal operational capacity'' (Naik et al., 2018) . However, whether one should expect systems to perform well on specially chosen cases (as opposed to the average case) may depend on one's goals. To put results in perspective, one may compare model performance to human performance on the same task (Gulordava et al., 2018) .",
"cite_spans": [
{
"start": 227,
"end": 246,
"text": "(Naik et al., 2018)",
"ref_id": "BIBREF133"
},
{
"start": 497,
"end": 521,
"text": "(Gulordava et al., 2018)",
"ref_id": "BIBREF82"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.6"
},
{
"text": "Understanding a model also requires an understanding of its failures. Despite their success in many tasks, machine learning systems can also be very sensitive to malicious attacks or adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015) . In the vision domain, small changes to the input image can lead to misclassification, even if such changes are indistinguishable by humans.",
"cite_spans": [
{
"start": 204,
"end": 226,
"text": "(Szegedy et al., 2014;",
"ref_id": "BIBREF171"
},
{
"start": 227,
"end": 251,
"text": "Goodfellow et al., 2015)",
"ref_id": "BIBREF81"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Examples",
"sec_num": "5"
},
{
"text": "The basic setup in work on adversarial examples can be described as follows. 13 Given a neural network model f and an input example x, we seek to generate an adversarial example x that will have a minimal distance from x, while being assigned a different label by f :",
"cite_spans": [
{
"start": 77,
"end": 79,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Examples",
"sec_num": "5"
},
{
"text": "min x ||x \u2212 x || s.t. f (x) = l, f (x ) = l , l = l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Examples",
"sec_num": "5"
},
{
"text": "In the vision domain, x can be the input image pixels, resulting in a fairly intuitive interpretation of this optimization problem: measuring the distance ||x \u2212 x || is straightforward, and finding x can be done by computing gradients with respect to the input, since all quantities are continuous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Examples",
"sec_num": "5"
},
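The continuous case above can be sketched with a toy two-class linear model (all weights and values here are illustrative, not taken from any cited work): take small signed-gradient steps on the input until the predicted label flips, keeping x' close to x.

```python
import numpy as np

W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # one weight row per class

def predict(x):
    return int(np.argmax(W @ x))

def adversarial_example(x, eps=0.1, max_steps=100):
    l = predict(x)
    x_adv = x.copy()
    for _ in range(max_steps):
        if predict(x_adv) != l:
            break
        # Gradient of the margin (true-class score minus other-class
        # score) w.r.t. the input; a signed step shrinks the margin.
        grad = W[l] - W[1 - l]
        x_adv = x_adv - eps * np.sign(grad)
    return x_adv

x = np.array([1.0, 0.2])
x_adv = adversarial_example(x)
print(predict(x), predict(x_adv), np.linalg.norm(x - x_adv))
```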
{
"text": "In the text domain, the input is discrete (for example, a sequence of words), which poses two problems. First, it is not clear how to measure the distance between the original and adversarial examples, x and x , which are two discrete objects (say, two words or sentences). Second, minimizing this distance cannot be easily formulated as an optimization problem, as this requires computing gradients with respect to a discrete input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Examples",
"sec_num": "5"
},
{
"text": "In the following, we review methods for handling these difficulties according to several criteria: the adversary's knowledge, the specificity of the attack, the linguistic unit being modified, and the task on which the attacked model was trained. 14 Table SM3 (in the supplementary materials) categorizes work on adversarial examples in NLP according to these criteria.",
"cite_spans": [
{
"start": 247,
"end": 249,
"text": "14",
"ref_id": null
}
],
"ref_spans": [
{
"start": 250,
"end": 259,
"text": "Table SM3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Examples",
"sec_num": "5"
},
{
"text": "Adversarial examples can be generated using access to model parameters, also known as white-box attacks, or without such access, with black-box attacks (Papernot et al., 2016a (Papernot et al., , 2017 Narodytska and Kasiviswanathan, 2017; .",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "(Papernot et al., 2016a",
"ref_id": null
},
{
"start": 176,
"end": 200,
"text": "(Papernot et al., , 2017",
"ref_id": "BIBREF138"
},
{
"start": 201,
"end": 238,
"text": "Narodytska and Kasiviswanathan, 2017;",
"ref_id": "BIBREF134"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversary's Knowledge",
"sec_num": "5.1"
},
{
"text": "White-box attacks are difficult to adapt to the text world as they typically require computing gradients with respect to the input, which would be discrete in the text case. One option is to compute gradients with respect to the input word embeddings, and perturb the embeddings. Since this may result in a vector that does not correspond to any word, one could search for the closest word embedding in a given dictionary (Papernot et al., 2016b) ; Cheng et al. (2018) extended this idea to seq2seq models. Others computed gradients with respect to input word embeddings to identify and rank words to be modified (Samanta and Mehta, 2017; Liang et al., 2018) . Ebrahimi et al. (2018b) developed an alternative method by representing text edit operations in vector space (e.g., a binary vector specifying which characters in a word would be changed) and approximating the change in loss with the derivative along this vector.",
"cite_spans": [
{
"start": 422,
"end": 446,
"text": "(Papernot et al., 2016b)",
"ref_id": null
},
{
"start": 449,
"end": 468,
"text": "Cheng et al. (2018)",
"ref_id": "BIBREF37"
},
{
"start": 613,
"end": 638,
"text": "(Samanta and Mehta, 2017;",
"ref_id": null
},
{
"start": 639,
"end": 658,
"text": "Liang et al., 2018)",
"ref_id": "BIBREF114"
},
{
"start": 661,
"end": 684,
"text": "Ebrahimi et al. (2018b)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversary's Knowledge",
"sec_num": "5.1"
},
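The embedding-perturbation idea above can be sketched as follows. Nothing here reproduces a specific paper's setup: the vocabulary, embeddings, and gradient are invented. The point is the two-step move: perturb a word's embedding along the loss gradient, then snap the result back to the nearest real word, since an arbitrary vector need not correspond to any vocabulary item.

```python
import numpy as np

vocab = ["good", "great", "bad", "awful"]
E = np.array([[1.0, 1.0],     # good
              [0.9, 1.1],     # great
              [-1.0, -1.0],   # bad
              [-0.9, -1.1]])  # awful

def nearest_word(v):
    return vocab[int(np.argmin(np.linalg.norm(E - v, axis=1)))]

def perturb_and_project(word, grad, step=2.5):
    """One signed-gradient step away from the current prediction,
    then projection back onto the vocabulary."""
    v = E[vocab.index(word)]
    return nearest_word(v - step * np.sign(grad))

# Assumed gradient pushing the embedding toward negative sentiment.
print(perturb_and_project("good", grad=np.array([1.0, 1.0])))  # bad
```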
{
"text": "Given the difficulty in generating white-box adversarial examples for text, much research has been devoted to black-box examples. Often, the adversarial examples are inspired by text edits that are thought to be natural or commonly generated by humans, such as typos, misspellings, and so on (Sakaguchi et al., 2017; Heigold et al., 2018; Belinkov and Bisk, 2018) . Gao et al. (2018) defined scoring functions to identify tokens to modify. Their functions do not require access to model internals, but they do require the model prediction score. After identifying the important tokens, they modify characters with common edit operations. Zhao et al. (2018c) used generative adversarial networks (GANs) to minimize the distance between latent representations of input and adversarial examples, and performed perturbations in latent space. Since the latent representations do not need to come from the attacked model, this is a black-box attack.",
"cite_spans": [
{
"start": 292,
"end": 316,
"text": "(Sakaguchi et al., 2017;",
"ref_id": "BIBREF156"
},
{
"start": 317,
"end": 338,
"text": "Heigold et al., 2018;",
"ref_id": "BIBREF88"
},
{
"start": 339,
"end": 363,
"text": "Belinkov and Bisk, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 366,
"end": 383,
"text": "Gao et al. (2018)",
"ref_id": "BIBREF69"
},
{
"start": 638,
"end": 657,
"text": "Zhao et al. (2018c)",
"ref_id": "BIBREF194"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversary's Knowledge",
"sec_num": "5.1"
},
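The black-box scoring-function approach above can be sketched as follows (the toy model and all names are illustrative): rank tokens by how much the prediction score drops when each is removed, then apply a small character edit to the most important token. Only query access to prediction scores is needed, not model internals.

```python
POSITIVE = {"good", "great", "excellent"}

def toy_score(tokens):
    """Stand-in for query access to a real model's positive-class score."""
    return sum(t in POSITIVE for t in tokens) / max(len(tokens), 1)

def token_importance(tokens):
    base = toy_score(tokens)
    return [base - toy_score(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]

def swap_first_chars(word):
    """One common edit operation: transpose the first two characters."""
    return word[1] + word[0] + word[2:] if len(word) > 1 else word

def attack(tokens):
    scores = token_importance(tokens)
    i = max(range(len(tokens)), key=lambda j: scores[j])
    return tokens[:i] + [swap_first_chars(tokens[i])] + tokens[i + 1:]

adv = attack(["a", "good", "movie"])
print(adv)  # the most influential token gets a typo: ['a', 'ogod', 'movie']
```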
{
"text": "Finally, Alzantot et al. (2018) developed an interesting population-based genetic algorithm for crafting adversarial examples for text classification by maintaining a population of modifications of the original sentence and evaluating fitness of modifications at each generation. They do not require access to model parameters, but do use prediction scores. A similar idea was proposed by Kuleshov et al. (2018) .",
"cite_spans": [
{
"start": 9,
"end": 31,
"text": "Alzantot et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 389,
"end": 411,
"text": "Kuleshov et al. (2018)",
"ref_id": "BIBREF107"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversary's Knowledge",
"sec_num": "5.1"
},
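The query-only population search above can be sketched in miniature. This is a deterministic simplification (a true genetic algorithm would mutate and cross over at random; here each generation expands all single-word synonym substitutions and keeps the fittest), and the synonym lists and toy scorer are invented, but it shows the loop: maintain a population of modified sentences and select by a fitness computed from prediction scores alone.

```python
SYNONYMS = {"good": ["fine", "nice"], "movie": ["film"]}

def toy_positive_score(tokens):
    """Stand-in for the attacked model's positive-class score."""
    return sum(t == "good" for t in tokens) / len(tokens)

def neighbors(tokens):
    """All single-word synonym substitutions of a sentence."""
    for i, t in enumerate(tokens):
        for s in SYNONYMS.get(t, []):
            yield tokens[:i] + [s] + tokens[i + 1:]

def population_attack(tokens, pop_size=4, generations=3):
    population = [tokens]
    for _ in range(generations):
        candidates = population + [n for p in population for n in neighbors(p)]
        # Fitness: lower positive score = further from the original label.
        candidates.sort(key=toy_positive_score)
        population = candidates[:pop_size]
    return population[0]

adv = population_attack(["a", "good", "movie"])
print(adv)  # ['a', 'fine', 'movie']
```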
{
"text": "Adversarial attacks can be classified to targeted vs. non-targeted attacks (Yuan et al., 2017) . A targeted attack specifies a specific false class, l , while a nontargeted attack cares only that the predicted class is wrong, l = l. Targeted attacks are more difficult to generate, as they typically require knowledge of model parameters; that is, they are white-box attacks. This might explain why the majority of adversarial examples in NLP are nontargeted (see Table SM3 ). A few targeted attacks include Liang et al. (2018) , which specified a desired class to fool a text classifier, and Chen et al. (2018a) , which specified words or captions to generate in an image captioning model. Others targeted specific words to omit, replace, or include when attacking seq2seq models (Cheng et al., 2018; Ebrahimi et al., 2018a) .",
"cite_spans": [
{
"start": 75,
"end": 94,
"text": "(Yuan et al., 2017)",
"ref_id": "BIBREF188"
},
{
"start": 508,
"end": 527,
"text": "Liang et al. (2018)",
"ref_id": "BIBREF114"
},
{
"start": 593,
"end": 612,
"text": "Chen et al. (2018a)",
"ref_id": "BIBREF34"
},
{
"start": 781,
"end": 801,
"text": "(Cheng et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 802,
"end": 825,
"text": "Ebrahimi et al., 2018a)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [
{
"start": 464,
"end": 473,
"text": "Table SM3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attack Specificity",
"sec_num": "5.2"
},
{
"text": "Methods for generating targeted attacks in NLP could possibly take more inspiration from adversarial attacks in other fields. For instance, in attacking malware detection systems, several studies developed targeted attacks in a blackbox scenario (Yuan et al., 2017) . A black-box targeted attack for MT was proposed by Zhao et al. (2018c) , who used GANs to search for attacks on Google's MT system after mapping sentences into continuous space with adversarially regularized autoencoders (Zhao et al., 2018b) .",
"cite_spans": [
{
"start": 246,
"end": 265,
"text": "(Yuan et al., 2017)",
"ref_id": "BIBREF188"
},
{
"start": 319,
"end": 338,
"text": "Zhao et al. (2018c)",
"ref_id": "BIBREF194"
},
{
"start": 489,
"end": 509,
"text": "(Zhao et al., 2018b)",
"ref_id": "BIBREF193"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attack Specificity",
"sec_num": "5.2"
},
{
"text": "Most of the work on adversarial text examples involves modifications at the character-and/or word-level; see Table SM3 for specific references. Other transformations include adding sentences or text chunks (Jia and Liang, 2017) or generating paraphrases with desired syntactic structures . In image captioning, Chen et al. (2018a) modified pixels in the input image to generate targeted attacks on the caption text.",
"cite_spans": [
{
"start": 206,
"end": 227,
"text": "(Jia and Liang, 2017)",
"ref_id": "BIBREF97"
},
{
"start": 311,
"end": 330,
"text": "Chen et al. (2018a)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 109,
"end": 118,
"text": "Table SM3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linguistic Unit",
"sec_num": "5.3"
},
{
"text": "Generally, most work on adversarial examples in NLP concentrates on relatively high-level language understanding tasks, such as text classification (including sentiment analysis) and reading comprehension, while work on text generation focuses mainly on MT. See Table SM3 for references. There is relatively little work on adversarial examples for more low-level language processing tasks, although one can mention morphological tagging (Heigold et al., 2018) and spelling correction (Sakaguchi et al., 2017) .",
"cite_spans": [
{
"start": 437,
"end": 459,
"text": "(Heigold et al., 2018)",
"ref_id": "BIBREF88"
},
{
"start": 484,
"end": 508,
"text": "(Sakaguchi et al., 2017)",
"ref_id": "BIBREF156"
}
],
"ref_spans": [
{
"start": 262,
"end": 271,
"text": "Table SM3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task",
"sec_num": "5.4"
},
{
"text": "In adversarial image examples, it is fairly straightforward to measure the perturbation, either by measuring distance in pixel space, say ||x \u2212 x || under some norm, or with alternative measures that are better correlated with human perception (Rozsa et al., 2016) . It is also visually compelling to present an adversarial image with imperceptible difference from its source image.",
"cite_spans": [
{
"start": 244,
"end": 264,
"text": "(Rozsa et al., 2016)",
"ref_id": "BIBREF152"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence and Perturbation Measurement",
"sec_num": "5.5"
},
{
"text": "In the text domain, measuring distance is not as straightforward, and even small changes to the text may be perceptible by humans. Thus, evaluation of attacks is fairly tricky. Some studies imposed constraints on adversarial examples to have a small number of edit operations (Gao et al., 2018) . Others ensured syntactic or semantic coherence in different ways, such as filtering replacements by word similarity or sentence similarity (Alzantot et al., 2018; Kuleshov et al., 2018) , or by using synonyms and other word lists (Samanta and Mehta, 2017; Yang et al., 2018) . Some reported whether a human can classify the adversarial example correctly (Yang et al., 2018) , but this does not indicate how perceptible the changes are. More informative human studies evaluate grammaticality or similarity of the adversarial examples to the original ones (Zhao et al., 2018c; Alzantot et al., 2018) . Given the inherent difficulty in generating imperceptible changes in text, more such evaluations are needed.",
"cite_spans": [
{
"start": 276,
"end": 294,
"text": "(Gao et al., 2018)",
"ref_id": "BIBREF69"
},
{
"start": 436,
"end": 459,
"text": "(Alzantot et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 460,
"end": 482,
"text": "Kuleshov et al., 2018)",
"ref_id": "BIBREF107"
},
{
"start": 527,
"end": 552,
"text": "(Samanta and Mehta, 2017;",
"ref_id": null
},
{
"start": 553,
"end": 571,
"text": "Yang et al., 2018)",
"ref_id": "BIBREF186"
},
{
"start": 651,
"end": 670,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF186"
},
{
"start": 851,
"end": 871,
"text": "(Zhao et al., 2018c;",
"ref_id": "BIBREF194"
},
{
"start": 872,
"end": 894,
"text": "Alzantot et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence and Perturbation Measurement",
"sec_num": "5.5"
},
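One concrete way to quantify a textual perturbation, of the edit-operation-budget kind mentioned above, can be sketched as follows: character-level Levenshtein distance between the original and adversarial strings, capped at a small budget (the budget value is illustrative).

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance between strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def within_budget(original, adversarial, budget=2):
    return levenshtein(original, adversarial) <= budget

print(levenshtein("good movie", "ogod movie"))  # 2 (a transposition = two substitutions here)
```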
{
"text": "Explaining specific predictions is recognized as a desideratum in intereptability work (Lipton, 2016) , argued to increase the accountability of machine learning systems . However, explaining why a deep, highly non-linear neural network makes a certain prediction is not trivial. One solution is to ask the model to generate explanations along with its primary prediction (Zaidan et al., 2007; Zhang et al., 2016) , 15 but this approach requires manual annotations of explanations, which may be hard to collect.",
"cite_spans": [
{
"start": 87,
"end": 101,
"text": "(Lipton, 2016)",
"ref_id": "BIBREF116"
},
{
"start": 372,
"end": 393,
"text": "(Zaidan et al., 2007;",
"ref_id": "BIBREF189"
},
{
"start": 394,
"end": 413,
"text": "Zhang et al., 2016)",
"ref_id": "BIBREF191"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining Predictions",
"sec_num": "6"
},
{
"text": "An alternative approach is to use parts of the input as explanations. For example, Lei et al. (2016) defined a generator that learns a distribution over text fragments as candidate rationales for justifying predictions, evaluated on sentiment analysis. Alvarez-Melis and Jaakkola (2017) discovered input-output associations in a sequence-to-sequence learning scenario, by perturbing the input and finding the most relevant associations. Gupta and Sch\u00fctze (2018) inspected how information is accumulated in RNNs towards a prediction, and associated peaks in prediction scores with important input segments. As these methods use input segments to explain predictions, they do not shed much light on the internal computations that take place in the network.",
"cite_spans": [
{
"start": 83,
"end": 100,
"text": "Lei et al. (2016)",
"ref_id": "BIBREF110"
},
{
"start": 437,
"end": 461,
"text": "Gupta and Sch\u00fctze (2018)",
"ref_id": "BIBREF84"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining Predictions",
"sec_num": "6"
},
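The perturbation-style explanations described above can be sketched with a leave-one-out word-importance probe (the toy model and word lists are invented): delete each word in turn and report the words whose removal most changes the prediction score, using them as a rationale for the prediction.

```python
POSITIVE = {"delightful", "charming"}
NEGATIVE = {"dull"}

def toy_score(tokens):
    """Stand-in for a model's positive-sentiment score."""
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

def rationale(tokens, top_k=1):
    """Rank words by the absolute change in score when deleted."""
    base = toy_score(tokens)
    deltas = [(abs(base - toy_score(tokens[:i] + tokens[i + 1:])), t)
              for i, t in enumerate(tokens)]
    deltas.sort(reverse=True)
    return [t for _, t in deltas[:top_k]]

print(rationale(["a", "delightful", "little", "film"]))  # ['delightful']
```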
{
"text": "At present, despite the recognized importance for interpretability, our ability to explain predictions of neural networks in NLP is still limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining Predictions",
"sec_num": "6"
},
{
"text": "We briefly mention here several analysis methods that do not fall neatly into the previous sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Methods",
"sec_num": "7"
},
{
"text": "A number of studies evaluated the effect of erasing or masking certain neural network components, such as word embedding dimensions, hidden units, or even full words (Li et al., 2016b; Feng et al., 2018; Khandelwal et al., 2018; Bau et al., 2018) . For example, Li et al. (2016b) erased specific dimensions in word embeddings or hidden states and computed the change in probability assigned to different labels. Their experiments revealed interesting differences between word embedding models: in some models, information is more concentrated in individual dimensions. They also found that information is more distributed in hidden layers than in the input layer. In addition, they erased entire words to identify important words in a sentiment analysis task.",
"cite_spans": [
{
"start": 166,
"end": 184,
"text": "(Li et al., 2016b;",
"ref_id": "BIBREF113"
},
{
"start": 185,
"end": 203,
"text": "Feng et al., 2018;",
"ref_id": "BIBREF62"
},
{
"start": 204,
"end": 228,
"text": "Khandelwal et al., 2018;",
"ref_id": "BIBREF101"
},
{
"start": 229,
"end": 246,
"text": "Bau et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 262,
"end": 279,
"text": "Li et al. (2016b)",
"ref_id": "BIBREF113"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Methods",
"sec_num": "7"
},
{
"text": "Several studies conducted behavioral experiments to interpret word embeddings by defining intrusion tasks, where humans need to identify an intruder word, chosen based on differences in word embedding dimensions (Murphy et al., 2012; Fyshe et al., 2015; Faruqui et al., 2015). 16 In this kind of work, a word embedding model may be deemed more interpretable if humans are better able to identify the intruding words. Since the evaluation is costly for high-dimensional representations, alternative automatic metrics were considered (Park et al., 2017; Senel et al., 2018).",
"cite_spans": [
{
"start": 212,
"end": 233,
"text": "(Murphy et al., 2012;",
"ref_id": "BIBREF130"
},
{
"start": 234,
"end": 253,
"text": "Fyshe et al., 2015;",
"ref_id": "BIBREF66"
},
{
"start": 254,
"end": 275,
"text": "Faruqui et al., 2015)",
"ref_id": "BIBREF61"
},
{
"start": 277,
"end": 279,
"text": "16",
"ref_id": null
},
{
"start": 532,
"end": 551,
"text": "(Park et al., 2017;",
"ref_id": "BIBREF141"
},
{
"start": 552,
"end": 571,
"text": "Senel et al., 2018)",
"ref_id": "BIBREF160"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Methods",
"sec_num": "7"
},
{
"text": "A long tradition in work on neural networks is to evaluate and analyze their ability to learn different formal languages (Das et al., 1992; Casey, 1996; Gers and Schmidhuber, 2001; Bod\u00e9n and Wiles, 2002; Chalup and Blair, 2003) . This trend continues today, with research into modern architectures and what formal languages they can learn (Weiss et al., 2018; Bernardy, 2018; Suzgun et al., 2019) , or the formal properties they possess (Chen et al., 2018b) .",
"cite_spans": [
{
"start": 121,
"end": 139,
"text": "(Das et al., 1992;",
"ref_id": "BIBREF45"
},
{
"start": 140,
"end": 152,
"text": "Casey, 1996;",
"ref_id": "BIBREF29"
},
{
"start": 153,
"end": 180,
"text": "Gers and Schmidhuber, 2001;",
"ref_id": "BIBREF71"
},
{
"start": 181,
"end": 203,
"text": "Bod\u00e9n and Wiles, 2002;",
"ref_id": "BIBREF22"
},
{
"start": 204,
"end": 227,
"text": "Chalup and Blair, 2003)",
"ref_id": "BIBREF32"
},
{
"start": 339,
"end": 359,
"text": "(Weiss et al., 2018;",
"ref_id": "BIBREF183"
},
{
"start": 360,
"end": 375,
"text": "Bernardy, 2018;",
"ref_id": "BIBREF19"
},
{
"start": 376,
"end": 396,
"text": "Suzgun et al., 2019)",
"ref_id": "BIBREF170"
},
{
"start": 437,
"end": 457,
"text": "(Chen et al., 2018b)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Methods",
"sec_num": "7"
},
{
"text": "Analyzing neural networks has become a hot topic in NLP research. This survey attempted to review and summarize as much of the current research as possible, while organizing it along several prominent themes. We have emphasized aspects of the analysis that are specific to language: what linguistic information is captured in neural networks, which phenomena they capture successfully, and where they fail. Many of the analysis methods are general techniques from the larger machine learning community, such as visualization via saliency measures or evaluation with adversarial examples. But even these sometimes require non-trivial adaptations to work with text input. Some methods are more specific to the field, but may prove useful in other domains; challenge sets (or test suites) are one such case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Throughout this survey, we have identified several limitations or gaps in current analysis work:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "\u2022 The use of auxiliary classification tasks for identifying which linguistic properties neural networks capture has become standard practice (Section 2), while lacking both a theoretical foundation and a thorough empirical examination of the link between the auxiliary tasks and the original task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "\u2022 Evaluation of analysis work is often limited or qualitative, especially for visualization techniques (Section 3). New forms of evaluation are needed to determine the success of different methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "\u2022 Relatively little work has been done on explaining predictions of neural network models, apart from providing visualizations (Section 6). With the increasing public demand for explaining algorithmic choices in machine learning systems (Doshi-Velez and Kim, 2017), there is a pressing need for progress in this direction.",
"cite_spans": [
{
"start": 237,
"end": 264,
"text": "(Doshi-Velez and Kim, 2017)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "\u2022 Much of the analysis work is focused on the English language, especially in constructing challenge sets for various tasks (Section 4), with the exception of MT due to its inherent multilingual character. Developing resources and evaluating methods on other languages is important as the field grows and matures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "\u2022 More challenge sets for evaluating other tasks besides NLI and MT are needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Finally, as with any survey in a rapidly evolving field, this paper is likely to omit relevant recent work by the time of publication. While we intend to continue updating the online appendix with newer publications, we hope that our summarization of prominent analysis work and its categorization into several themes will be a useful guide for scholars interested in analyzing and understanding neural networks for NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "For instance, a neural network that learns distributed representations of words was developed as early as Miikkulainen and Dyer (1991). See Goodfellow et al. (2016, chapter 12.4) for references to other important milestones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A similar method has been used to analyze hierarchical structure in neural networks trained on arithmetic expressions (Veldhoen et al., 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Others found that even simple binary trees may work well in MT (Wang et al., 2018b) and sentence classification (Chen et al., 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Generally, many of the visualization methods are adapted from the vision domain, where they have been extremely popular; see Zhang and Zhu (2018) for a survey.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "RNNVis (Ming et al., 2017) is a similar tool, but its online demo does not seem to be available at the time of writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One could speculate that their decrease in popularity can be attributed to the rise of large-scale quantitative evaluation of statistical NLP systems. Another typology of evaluation protocols was put forth by Burlot and Yvon (2017). Their criteria partially overlap with ours, although they did not provide a comprehensive categorization like the one compiled here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Their dataset does not seem to be available yet, but more details are promised to appear in a future publication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "also verified that their examples do not contain annotation artifacts, a potential problem noted in recent studies (Gururangan et al., 2018; Poliak et al., 2018b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The notation here follows Yuan et al. (2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These criteria are partly taken from Yuan et al. (2017), where a more elaborate taxonomy is laid out. At present, though, the work on adversarial examples in NLP is more limited than in computer vision, so our criteria will suffice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Other work considered learning textual-visual explanations from multimodal annotations (Park et al., 2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The methodology follows earlier work on evaluating the interpretability of probabilistic topic models with intrusion tasks (Chang et al., 2009).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers and the action editor for their very helpful comments. This work was supported by the Qatar Computing Research Institute. Y.B. is also supported by the Harvard Mind, Brain, Behavior Initiative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Analysis of sentence embedding models using prediction tasks in natural language processing",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "IBM Journal of Research and Development",
"volume": "61",
"issue": "4",
"pages": "3--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017a. Anal- ysis of sentence embedding models using prediction tasks in natural language processing. IBM Journal of Research and Development, 61(4):3-9.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fine-Grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine- Grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. In Interna- tional Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Morphological Inflection Generation with Hard Monotonic Attention",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2004--2015",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphological Inflection Generation with Hard Monotonic Attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2004-2015. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi-task Learning for Universal Sentence Embeddings: A Thorough Evaluation using Transfer and Auxiliary Tasks",
"authors": [
{
"first": "Wasi",
"middle": [
"Uddin"
],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Xueying",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Zhechao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07911v2"
]
},
"num": null,
"urls": [],
"raw_text": "Wasi Uddin Ahmad, Xueying Bai, Zhechao Huang, Chao Jiang, Nanyun Peng, and Kai-Wei Chang. 2018. Multi-task Learning for Universal Sentence Embeddings: A Thorough Evaluation using Transfer and Auxiliary Tasks. arXiv preprint arXiv:1804.07911v2.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Encoding of phonology in a recurrent neural model of grounded speech",
"authors": [
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Barking",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "368--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afra Alishahi, Marie Barking, and Grzegorz Chrupa\u0142a. 2017. Encoding of phonology in a recurrent neural model of grounded speech. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 368-378. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A causal framework for explaining the predictions of black-box sequence-to-sequence models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Alvarez-Melis",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "412--421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Alvarez-Melis and Tommi Jaakkola. 2017. A causal framework for explaining the predictions of black-box sequence-to-sequence models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 412-421. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generating Natural Language Adversarial Examples",
"authors": [
{
"first": "Moustafa",
"middle": [],
"last": "Alzantot",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Bo-Jhang",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2890--2896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating Natural Language Adversarial Examples. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What is relevant in a text document?'': An interpretable machine learning approach",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Arras",
"suffix": ""
},
{
"first": "Franziska",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2017,
"venue": "PLOS ONE",
"volume": "12",
"issue": "8",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Arras, Franziska Horn, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2017a. ''What is relevant in a text document?'': An interpretable machine learning approach. PLOS ONE, 12(8):1-23.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Explaining Recurrent Neural Network Predictions in Sentiment Analysis",
"authors": [
{
"first": "Leila",
"middle": [],
"last": "Arras",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "159--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leila Arras, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2017b. Explain- ing Recurrent Neural Network Predictions in Sentiment Analysis. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 159-168. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Uncovering Divergent Linguistic Information in Word Embeddings with Lessons for Intrinsic and Extrinsic Evaluation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "282--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Inigo Lopez- Gazpio, and Eneko Agirre. 2018. Uncovering Divergent Linguistic Information in Word Embeddings with Lessons for Intrinsic and Extrinsic Evaluation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 282-291. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Interpreting Neural Networks to Improve Politeness Comprehension",
"authors": [
{
"first": "Malika",
"middle": [],
"last": "Aubakirova",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2035--2041",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malika Aubakirova and Mohit Bansal. 2016. Inter- preting Neural Networks to Improve Politeness Comprehension. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2035-2041. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473v7"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473v7.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Identifying and Controlling Important Neurons in Neural Machine Translation",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Bau",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.01157v1"
]
},
"num": null,
"urls": [],
"raw_text": "Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2018. Identifying and Controlling Important Neurons in Neural Machine Translation. arXiv preprint arXiv:1811.01157v1.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Evaluating Discourse Phenomena in Neural Machine Translation",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Bawden",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1304--1313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating Discourse Phenomena in Neural Machine Translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 1304-1313. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "On Internal Language Representations in Deep Learning: An Analysis of Machine Translation and Speech Recognition",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov. 2018. On Internal Language Representations in Deep Learning: An Analy- sis of Machine Translation and Speech Recog- nition. Ph.D. thesis, Massachusetts Institute of Technology.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Synthetic and Natural Noise Both Break Neural Machine Translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Syn- thetic and Natural Noise Both Break Neural Machine Translation. In International Confer- ence on Learning Representations (ICLR).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "What do Neural Machine Translation Models Learn about Morphology?",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "861--872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017a. What do Neural Machine Translation Models Learn about Morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861-872. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30",
"volume": "",
"issue": "",
"pages": "2441--2451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and James Glass. 2017, Anal- yzing Hidden Representations in End-to-End Automatic Speech Recognition Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Ad- vances in Neural Information Processing Sys- tems 30, pages 2441-2451. Curran Associates, Inc.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017b. Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1-10. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Can Recurrent Neural Networks Learn Nested Recursion? LiLT (Linguistic Issues in Language Technology)",
"authors": [
{
"first": "Jean-Philippe",
"middle": [],
"last": "Bernardy",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Philippe Bernardy. 2018. Can Recurrent Neural Networks Learn Nested Recursion? LiLT (Linguistic Issues in Language Tech- nology), 16(1).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Lazy Encoder: A Fine-Grained Analysis of the Role of Morphology in Neural Machine Translation",
"authors": [
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Tump",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2871--2876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arianna Bisazza and Clara Tump. 2018. The Lazy Encoder: A Fine-Grained Analysis of the Role of Morphology in Neural Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2871-2876. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deep RNNs Encode Soft Hierarchical Syntax",
"authors": [
{
"first": "Terra",
"middle": [],
"last": "Blevins",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "14--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terra Blevins, Omer Levy, and Luke Zettlemoyer. 2018. Deep RNNs Encode Soft Hierarchi- cal Syntax. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 14-19. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "On learning context-free and context-sensitive languages",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Bod\u00e9n",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Wiles",
"suffix": ""
}
],
"year": 2002,
"venue": "IEEE Transactions on Neural Networks",
"volume": "13",
"issue": "2",
"pages": "491--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael Bod\u00e9n and Janet Wiles. 2002. On learning context-free and context-sensitive languages. IEEE Transactions on Neural Networks, 13(2): 491-493.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributional Semantics in Technicolor",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Nam Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "136--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. 2012. Distributional Semantics in Technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136-145. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Natural Language Multitasking: Analyzing and Improving Syntactic Saliency of Hidden Representations",
"authors": [
{
"first": "Gino",
"middle": [],
"last": "Brunner",
"suffix": ""
},
{
"first": "Yuyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wattenhofer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Weigelt",
"suffix": ""
}
],
"year": 2017,
"venue": "The 31st Annual Conference on Neural Information Processing (NIPS)-Workshop on Learning Disentangled Features: From Perception to Control",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gino Brunner, Yuyi Wang, Roger Wattenhofer, and Michael Weigelt. 2017. Natural Language Multitasking: Analyzing and Improving Syn- tactic Saliency of Hidden Representations. The 31st Annual Conference on Neural Information Processing (NIPS)-Workshop on Learning Disentangled Features: From Perception to Control.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A Linguistic Evaluation of Rule-Based, Phrase-Based, and Neural MT Engines",
"authors": [
{
"first": "Aljoscha",
"middle": [],
"last": "Burchardt",
"suffix": ""
},
{
"first": "Vivien",
"middle": [],
"last": "Macketanz",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Dehdari",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Heigold",
"suffix": ""
},
{
"first": "Jan-Thorsten",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2017,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "108",
"issue": "1",
"pages": "159--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aljoscha Burchardt, Vivien Macketanz, Jon Dehdari, Georg Heigold, Jan-Thorsten Peter, and Philip Williams. 2017. A Linguistic Evaluation of Rule-Based, Phrase-Based, and Neural MT Engines. The Prague Bulletin of Mathematical Linguistics, 108(1):159-170.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Evaluating the morphological competence of",
"authors": [
{
"first": "Franck",
"middle": [],
"last": "Burlot",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franck Burlot and Fran\u00e7ois Yvon. 2017. Evaluating the morphological competence of",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Machine Translation Systems",
"authors": [],
"year": null,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "43--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Machine Translation Systems. In Proceedings of the Second Conference on Machine Trans- lation, pages 43-55. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Casey",
"suffix": ""
}
],
"year": 1996,
"venue": "Neural Computation",
"volume": "8",
"issue": "6",
"pages": "1135--1178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Casey. 1996. The Dynamics of Discrete- Time Computation, with Application to Re- current Neural Networks and Finite State Machine Extraction. Neural Computation, 8(6):1135-1178.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic Textual Sim- ilarity Multilingual and Crosslingual Focused Evaluation. In Proceedings of the 11th Inter- national Workshop on Semantic Evaluation (SemEval-2017), pages 1-14. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning weakly supervised multimodal phoneme embeddings",
"authors": [
{
"first": "Rahma",
"middle": [],
"last": "Chaabouni",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Dunbar",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Zeghidour",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahma Chaabouni, Ewan Dunbar, Neil Zeghidour, and Emmanuel Dupoux. 2017. Learning weakly supervised multimodal phoneme embeddings. In Interspeech 2017.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Incremental Training of First Order Recurrent Neural Networks to Predict a Context-Sensitive Language",
"authors": [
{
"first": "Stephan",
"middle": [
"K"
],
"last": "Chalup",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"D"
],
"last": "Blair",
"suffix": ""
}
],
"year": 2003,
"venue": "Neural Networks",
"volume": "16",
"issue": "7",
"pages": "955--972",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan K. Chalup and Alan D. Blair. 2003. Incremental Training of First Order Recurrent Neural Networks to Predict a Context-Sensitive Language. Neural Networks, 16(7):955-972.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Reading Tea Leaves: How Humans Interpret Topic Models",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Gerrish",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jordan",
"middle": [
"L"
],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems",
"volume": "22",
"issue": "",
"pages": "288--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L. Boyd-graber, and David M. Blei. 2009, Reading Tea Leaves: How Humans Inter- pret Topic Models, Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Pro- cessing Systems 22, pages 288-296, Curran Associates, Inc..",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Attacking visual language grounding with adversarial examples: A case study on neural image captioning",
"authors": [
{
"first": "Hongge",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Pin-Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2587--2597",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh. 2018a. Attacking visual language grounding with adversarial examples: A case study on neural image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2587-2597. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sentence Modeling with Gated Recursive Neural Network",
"authors": [
{
"first": "Xinchi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Chenxi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "793--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Shiyu Wu, and Xuanjing Huang. 2015. Sentence Modeling with Gated Recursive Neural Network. In Proc- eedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 793-798. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Recurrent Neural Networks as Weighted Language Recognizers",
"authors": [
{
"first": "Yining",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sorcha",
"middle": [],
"last": "Gilroy",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Maletti",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2261--2271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. 2018b. Recurrent Neural Networks as Weighted Language Recognizers. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2261-2271. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples",
"authors": [
{
"first": "Minhao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Pin-Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.01128v1"
]
},
"num": null,
"urls": [],
"raw_text": "Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. Seq2Sick: Evaluating the Robustness of Sequence-to- Sequence Models with Adversarial Examples. arXiv preprint arXiv:1803.01128v1.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Representations of language in a model of visually grounded speech signal",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Lieke",
"middle": [],
"last": "Gelderloos",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "613--622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupa\u0142a, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 613-622. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Are BLEU and Meaning Representation in Opposition?",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "C\u00edfka",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1362--1371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej C\u00edfka and Ond\u0159ej Bojar. 2018. Are BLEU and Meaning Representation in Opposition? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1362-1371. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2126--2136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Germ\u00e1n Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Using the framework",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Dick",
"middle": [],
"last": "Crouch",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Van Eijck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Jaspars",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Kamp",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Milward",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Pulman",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Maier",
"suffix": ""
},
{
"first": "Karsten",
"middle": [],
"last": "Konrad",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper, Dick Crouch, Jan van Eijck, Chris Fox, Josef van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, Steve Pulman, Ted Briscoe, Holger Maier, and Karsten Konrad. 1996, Using the framework. Technical report, The FraCaS Consortium.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models",
"authors": [
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "D",
"middle": [
"Anthony"
],
"last": "Bau",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, D. Anthony Bau, and James Glass. 2019a, January. What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder",
"authors": [
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "142--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, and Stephan Vogel. 2017. Understanding and Improving Morphological Learning in the Neural Machine Transla- tion Decoder. In Proceedings of the Eighth International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 142-151. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks",
"authors": [
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Avery",
"middle": [],
"last": "Nortonsmith",
"suffix": ""
},
{
"first": "D",
"middle": [
"Anthony"
],
"last": "Bau",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI): Demonstrations Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fahim Dalvi, Avery Nortonsmith, D. Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, and James Glass. 2019b, January. NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI): Demonstrations Track.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Learning Context-Free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory",
"authors": [
{
"first": "Sreerupa",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lee"
],
"last": "Giles",
"suffix": ""
},
{
"first": "Guo-Zheng",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of The Fourteenth Annual Conference of Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sreerupa Das, C. Lee Giles, and Guo-Zheng Sun. 1992. Learning Context-Free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory. In Proceedings of The Fourteenth Annual Conference of Cognitive Science Society. Indiana University, page 14.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Evaluating Compositionality in Sentence Embeddings",
"authors": [
{
"first": "Ishita",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "Demi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stuhlm\u00fcller",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"J"
],
"last": "Gershman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.04302v2"
]
},
"num": null,
"urls": [],
"raw_text": "Ishita Dasgupta, Demi Guo, Andreas Stuhlm\u00fcller, Samuel J. Gershman, and Noah D. Goodman. 2018. Evaluating Compositionality in Sen- tence Embeddings. arXiv preprint arXiv:1802. 04302v2.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "The Emergence of Semantics in Neural Network Representations of Visual Information",
"authors": [
{
"first": "Dhanush",
"middle": [],
"last": "Dharmaretnam",
"suffix": ""
},
{
"first": "Alona",
"middle": [],
"last": "Fyshe",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "776--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhanush Dharmaretnam and Alona Fyshe. 2018. The Emergence of Semantics in Neural Network Representations of Visual Information. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 776-780. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Visualizing and Understanding Neural Machine Translation",
"authors": [
{
"first": "Yanzhuo",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1150--1159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and Understanding Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150-1159. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Towards a Rigorous Science of Interpretable Machine Learning",
"authors": [
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
},
{
"first": "Been",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08608v2"
]
},
"num": null,
"urls": [],
"raw_text": "Finale Doshi-Velez and Been Kim. 2017. Towards a Rigorous Science of Interpretable Machine Learning. In arXiv preprint arXiv: 1702.08608v2.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Accountability of AI Under the Law: The Role of Explanation",
"authors": [
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
},
{
"first": "Mason",
"middle": [],
"last": "Kortz",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Budish",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Bavitz",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gershman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "O'Brien",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Waldo",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Wood",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O'Brien, Stuart Shieber, James Waldo, David Weinberger, and Alexandra Wood. 2017. Accountability of AI Under the Law: The Role of Explanation. Privacy Law Scholars Conference.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Analysis of Audio-Visual Features for Unsupervised Speech Recognition",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Drexler",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "International Workshop on Grounding Language Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Drexler and James Glass. 2017. Analysis of Audio-Visual Features for Unsupervised Speech Recognition. In International Workshop on Grounding Language Understanding.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "On Adversarial Examples for Character-Level Neural Machine Translation",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "653--663",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018a. On Adversarial Examples for Character-Level Neural Machine Translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 653-663. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "HotFlip: White-Box Adversarial Examples for Text Classification",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Anyi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "31--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018b. HotFlip: White-Box Adversarial Examples for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "A Challenge Set and Methods for Noun-Verb Ambiguity",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Elkahky",
"suffix": ""
},
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2562--2572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Elkahky, Kellie Webster, Daniel Andor, and Emily Pitler. 2018. A Challenge Set and Methods for Noun-Verb Ambiguity. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2562-2572. Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Analyzing Learned Representations of a Deep ASR Performance Prediction Model",
"authors": [
{
"first": "Zied",
"middle": [],
"last": "Elloumi",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Galibert",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lecouteux",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zied Elloumi, Laurent Besacier, Olivier Galibert, and Benjamin Lecouteux. 2018. Analyzing Learned Representations of a Deep ASR Performance Prediction Model. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 9-15. Association for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Representation and Structure in Connectionist Models",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L. Elman. 1989. Representation and Structure in Connectionist Models, University of California, San Diego, Center for Research in Language.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Finding Structure in Time",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science",
"volume": "14",
"issue": "2",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L. Elman. 1990. Finding Structure in Time. Cognitive Science, 14(2):179-211.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Distributed representations, simple recurrent networks, and grammatical structure",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1991,
"venue": "Machine Learning",
"volume": "7",
"issue": "",
"pages": "195--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L. Elman. 1991. Distributed represen- tations, simple recurrent networks, and gram- matical structure. Machine Learning, 7(2-3): 195-225.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Probing for semantic evidence of composition by means of simple classification tasks",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "134--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134-139. Association for Computational Linguistics.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Problems With Evaluation of Word Embeddings Using Word Similarity Tasks",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems With Evaluation of Word Embeddings Using Word Similarity Tasks. In Proceedings of the 1st Workshop on Evaluating Vector Space Representations for NLP.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Sparse Overcomplete Word Vector Representations",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1491--1500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse Overcomplete Word Vector Representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1491-1500. Association for Computational Linguistics.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Pathologies of Neural Models Make Interpretations Difficult",
"authors": [
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Alvin",
"middle": [],
"last": "Grissom",
"suffix": "II"
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3719--3728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of Neural Models Make Interpretations Difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719-3728. Association for Computational Linguistics.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Placing Search in Context: The Concept Revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing Search in Context: The Concept Revisited. ACM Transactions on Information Systems, 20(1):116-131.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "The Acquisition of Anaphora by Simple Recurrent Networks",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Mathis",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Badecker",
"suffix": ""
}
],
"year": 2013,
"venue": "Language Acquisition",
"volume": "20",
"issue": "3",
"pages": "181--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Frank, Donald Mathis, and William Badecker. 2013. The Acquisition of Anaphora by Simple Recurrent Networks. Language Acquisition, 20(3):181-227.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Paying Attention to Attention: Highlighting Influential Samples in Sequential Analysis",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Freeman",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Merriman",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Beaver",
"suffix": ""
},
{
"first": "Abdullah",
"middle": [],
"last": "Mueen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.02113v1"
]
},
"num": null,
"urls": [],
"raw_text": "Cynthia Freeman, Jonathan Merriman, Abhinav Aggarwal, Ian Beaver, and Abdullah Mueen. 2018. Paying Attention to Attention: Highlighting Influential Samples in Sequential Analysis. arXiv preprint arXiv:1808.02113v1.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "A Compositional and Interpretable Semantic Space",
"authors": [
{
"first": "Alona",
"middle": [],
"last": "Fyshe",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Wehbe",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "32--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alona Fyshe, Leila Wehbe, Partha P. Talukdar, Brian Murphy, and Tom M. Mitchell. 2015. A Compositional and Interpretable Semantic Space. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 32-41. Association for Computational Linguistics.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "What's Going On in Neural Constituency Parsers? An Analysis",
"authors": [
{
"first": "David",
"middle": [],
"last": "Gaddy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "999--1010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Gaddy, Mitchell Stern, and Dan Klein. 2018. What's Going On in Neural Constituency Parsers? An Analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 999-1010. Association for Computational Linguistics.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Interpretation of Semantic Tweet Representations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17",
"volume": "",
"issue": "",
"pages": "95--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ganesh, Manish Gupta, and Vasudeva Varma. 2017. Interpretation of Semantic Tweet Representations. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17, pages 95-102, New York, NY, USA. ACM.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Lanchantin",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Lou"
],
"last": "Soffa",
"suffix": ""
},
{
"first": "Yanjun",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.04354v5"
]
},
"num": null,
"urls": [],
"raw_text": "Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers. arXiv preprint arXiv:1801.04354v5.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "From phonemes to images: Levels of representation in a recurrent neural model of visuallygrounded language learning",
"authors": [
{
"first": "Lieke",
"middle": [],
"last": "Gelderloos",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1309--1319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lieke Gelderloos and Grzegorz Chrupa\u0142a. 2016. From phonemes to images: Levels of representation in a recurrent neural model of visually-grounded language learning. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1309-1319, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "LSTM Recurrent Networks Learn Simple Context-Free and Context-Sensitive Languages",
"authors": [
{
"first": "Felix",
"middle": [
"A"
],
"last": "Gers",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2001,
"venue": "IEEE Transactions on Neural Networks",
"volume": "12",
"issue": "6",
"pages": "1333--1340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix A. Gers and J\u00fcrgen Schmidhuber. 2001. LSTM Recurrent Networks Learn Simple Context-Free and Context-Sensitive Languages. IEEE Transactions on Neural Networks, 12(6):1333-1340.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity",
"authors": [
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2173--2182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173-2182. Association for Computational Linguistics.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "What does Attention in Neural Machine Translation Pay Attention to?",
"authors": [
{
"first": "Hamidreza",
"middle": [],
"last": "Ghader",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "30--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamidreza Ghader and Christof Monz. 2017. What does Attention in Neural Machine Translation Pay Attention to? In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 30-39. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Interpreting Recurrent and Attention-Based Neural Models: A Case Study on Natural Language Inference",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Ghaeini",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Tadepalli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4952--4957",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reza Ghaeini, Xiaoli Fern, and Prasad Tadepalli. 2018. Interpreting Recurrent and Attention-Based Neural Models: A Case Study on Natural Language Inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4952-4957. Association for Computational Linguistics.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Under the Hood: Using Diagnostic Classifiers to Investigate and Improve How Language Models Track Agreement Information",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Harding",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Mohnert",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "240--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve How Language Models Track Agreement Information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240-248. Association for Computational Linguistics.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "Breaking NLI Systems with Sentences that Require Simple Lexical Inferences",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "650--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655. Association for Computational Linguistics.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?",
"authors": [
{
"first": "Fr\u00e9deric",
"middle": [],
"last": "Godin",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Demuynck",
"suffix": ""
},
{
"first": "Joni",
"middle": [],
"last": "Dambre",
"suffix": ""
},
{
"first": "Wesley",
"middle": [],
"last": "De Neve",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3275--3284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9deric Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, and Thomas Demeester. 2018. Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules? In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3275-3284. Association for Computational Linguistics.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Neural Network Methods for Natural Language Processing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2017. Neural Network Methods for Natural Language Processing, volume 10 of Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "Deep Learning",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http://www.deeplearningbook.org.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "Generative Adversarial Nets",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems, pages 2672-2680.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "Explaining and Harnessing Adversarial Examples",
"authors": [
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF82": {
"ref_id": "b82",
"title": "Colorless Green Recurrent Networks Dream Hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1195--1205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless Green Recurrent Networks Dream Hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205. Association for Computational Linguistics.",
"links": null
},
"BIBREF83": {
"ref_id": "b83",
"title": "Distributional vectors encode referential attributes",
"authors": [
{
"first": "Abhijeet",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijeet Gupta, Gemma Boleda, Marco Baroni, and Sebastian Pad\u00f3. 2015. Distributional vectors encode referential attributes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 12-21. Association for Computational Linguistics.",
"links": null
},
"BIBREF84": {
"ref_id": "b84",
"title": "LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation",
"authors": [
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "154--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pankaj Gupta and Hinrich Sch\u00fctze. 2018. LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 154-164. Association for Computational Linguistics.",
"links": null
},
"BIBREF85": {
"ref_id": "b85",
"title": "Annotation Artifacts in Natural Language Inference Data",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation Artifacts in Natural Language Inference Data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers).",
"links": null
},
"BIBREF86": {
"ref_id": "b86",
"title": "Connectionism and Cognitive Linguistics",
"authors": [
{
"first": "Catherine",
"middle": [
"L"
],
"last": "Harris",
"suffix": ""
}
],
"year": 1990,
"venue": "Connection Science",
"volume": "2",
"issue": "1-2",
"pages": "7--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine L. Harris. 1990. Connectionism and Cognitive Linguistics. Connection Science, 2(1-2):7-33.",
"links": null
},
"BIBREF87": {
"ref_id": "b87",
"title": "Learning Word-Like Units from Joint Audio-Visual Analysis",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harwath",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "506--517",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Harwath and James Glass. 2017. Learning Word-Like Units from Joint Audio-Visual Analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 506-517. Association for Computational Linguistics.",
"links": null
},
"BIBREF88": {
"ref_id": "b88",
"title": "How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?",
"authors": [
{
"first": "Georg",
"middle": [],
"last": "Heigold",
"suffix": ""
},
{
"first": "G\u00fcnter",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 13th Conference of The Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "68--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georg Heigold, G\u00fcnter Neumann, and Josef van Genabith. 2018. How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse? In Proceedings of the 13th Conference of The Association for Machine Translation in the Americas (Volume 1: Research Track), pages 68-79.",
"links": null
},
"BIBREF89": {
"ref_id": "b89",
"title": "SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation. Computational Linguistics, 41(4):665-695.",
"links": null
},
"BIBREF90": {
"ref_id": "b90",
"title": "Visualisation and ''diagnostic classifiers'' reveal how recurrent and recursive neural networks process hierarchical structure",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Veldhoen",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "61",
"issue": "",
"pages": "907--926",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and ''diagnostic classifiers'' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.",
"links": null
},
"BIBREF91": {
"ref_id": "b91",
"title": "A Challenge Set Approach to Evaluating Machine Translation",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Isabelle",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2486--2496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Isabelle, Colin Cherry, and George Foster. 2017. A Challenge Set Approach to Evaluating Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2486-2496. Association for Computational Linguistics.",
"links": null
},
"BIBREF92": {
"ref_id": "b92",
"title": "A Challenge Set for French->English Machine Translation",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Isabelle",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.02725v2"
]
},
"num": null,
"urls": [],
"raw_text": "Pierre Isabelle and Roland Kuhn. 2018. A Challenge Set for French->English Machine Translation. arXiv preprint arXiv:1806.02725v2.",
"links": null
},
"BIBREF93": {
"ref_id": "b93",
"title": "JEIDA's test-sets for quality evaluation of MT systems-technical evaluation from the developer's point of view",
"authors": [
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of MT Summit V",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hitoshi Isahara. 1995. JEIDA's test-sets for quality evaluation of MT systems-technical evaluation from the developer's point of view. In Proceedings of MT Summit V.",
"links": null
},
"BIBREF94": {
"ref_id": "b94",
"title": "Adversarial Example Generation with Syntactically Controlled Paraphrase Networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1875--1885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885. Association for Computational Linguistics.",
"links": null
},
"BIBREF95": {
"ref_id": "b95",
"title": "Understanding Convolutional Neural Networks for Text Classification",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Jacovi",
"suffix": ""
},
{
"first": "Oren",
"middle": [
"Sar"
],
"last": "Shalom",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "56--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Jacovi, Oren Sar Shalom, and Yoav Goldberg. 2018. Understanding Convolutional Neural Networks for Text Classification. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 56-65. Association for Computational Linguistics.",
"links": null
},
"BIBREF96": {
"ref_id": "b96",
"title": "A Shared Attention Mechanism for Interpretation of Neural Automatic Post-Editing Systems",
"authors": [
{
"first": "Inigo",
"middle": [],
"last": "Jauregi Unanue",
"suffix": ""
},
{
"first": "Ehsan",
"middle": [
"Zare"
],
"last": "Borzeshi",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Piccardi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "11--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inigo Jauregi Unanue, Ehsan Zare Borzeshi, and Massimo Piccardi. 2018. A Shared Attention Mechanism for Interpretation of Neural Automatic Post-Editing Systems. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 11-17. Association for Computational Linguistics.",
"links": null
},
"BIBREF97": {
"ref_id": "b97",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031. Association for Computational Linguistics.",
"links": null
},
"BIBREF98": {
"ref_id": "b98",
"title": "Exploring the Limits of Language Modeling",
"authors": [
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.02410v2"
]
},
"num": null,
"urls": [],
"raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv preprint arXiv:1602.02410v2.",
"links": null
},
"BIBREF99": {
"ref_id": "b99",
"title": "Representation of Linguistic Form and Function in Recurrent Neural Networks",
"authors": [
{
"first": "Akos",
"middle": [],
"last": "K\u00e1d\u00e1r",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "4",
"pages": "761--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akos K\u00e1d\u00e1r, Grzegorz Chrupa\u0142a, and Afra Alishahi. 2017. Representation of Lin- guistic Form and Function in Recurrent Neural Networks. Computational Linguistics, 43(4):761-780.",
"links": null
},
"BIBREF100": {
"ref_id": "b100",
"title": "Visualizing and Understanding Recurrent Networks",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Fei-Fei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.02078v2"
]
},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and Understanding Recurrent Networks. arXiv preprint arXiv:1506.02078v2.",
"links": null
},
"BIBREF101": {
"ref_id": "b101",
"title": "Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "284--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284-294. Association for Computational Linguistics.",
"links": null
},
"BIBREF102": {
"ref_id": "b102",
"title": "Using Test Suites in Evaluation of Machine 66",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Kirsten",
"middle": [],
"last": "Falkedal",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret King and Kirsten Falkedal. 1990. Using Test Suites in Evaluation of Machine 66",
"links": null
},
"BIBREF103": {
"ref_id": "b103",
"title": "Translation Systems",
"authors": [],
"year": 1990,
"venue": "Papers Presented to the 13th International Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Translation Systems. In COLNG 1990 Volume 2: Papers Presented to the 13th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF104": {
"ref_id": "b104",
"title": "Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Represen- tations. Transactions of the Association for Computational Linguistics, 4:313-327.",
"links": null
},
"BIBREF105": {
"ref_id": "b105",
"title": "A test suite for evaluation of English-to-Korean machine translation systems",
"authors": [
{
"first": "Sungryong",
"middle": [],
"last": "Koh",
"suffix": ""
},
{
"first": "Jinee",
"middle": [],
"last": "Maeng",
"suffix": ""
},
{
"first": "Ji-Young",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Young-Sook",
"middle": [],
"last": "Chae",
"suffix": ""
},
{
"first": "Key-Sun",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2001,
"venue": "MT Summit Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungryong Koh, Jinee Maeng, Ji-Young Lee, Young-Sook Chae, and Key-Sun Choi. 2001. A test suite for evaluation of English-to-Korean machine translation systems. In MT Summit Conference.",
"links": null
},
"BIBREF106": {
"ref_id": "b106",
"title": "What's in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation",
"authors": [
{
"first": "Arne",
"middle": [],
"last": "K\u00f6hn",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2067--2073",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arne K\u00f6hn. 2015. What's in an Embedding? Analyzing Word Embeddings through Multi- lingual Evaluation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 2067-2073, Lisbon, Portugal. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF107": {
"ref_id": "b107",
"title": "Adversarial Examples for Natural Language Classification Problems",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Kuleshov",
"suffix": ""
},
{
"first": "Shantanu",
"middle": [],
"last": "Thakoor",
"suffix": ""
},
{
"first": "Tingfung",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Ermon",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial Examples for Natural Language Classification Problems.",
"links": null
},
"BIBREF108": {
"ref_id": "b108",
"title": "Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks",
"authors": [
{
"first": "Brenden",
"middle": [],
"last": "Lake",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "2873--2882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brenden Lake and Marco Baroni. 2018. Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Ma- chine Learning Research, pages 2873-2882, Stockholmsm\u00e4ssan, Stockholm, Sweden. PMLR.",
"links": null
},
"BIBREF109": {
"ref_id": "b109",
"title": "TSNLP-Test Suites for Natural Language Processing",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Sylvie",
"middle": [],
"last": "Regnier-Prost",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Netter",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Lux",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Kirsten",
"middle": [],
"last": "Falkedal",
"suffix": ""
},
{
"first": "Frederik",
"middle": [],
"last": "Fouvry",
"suffix": ""
},
{
"first": "Dominique",
"middle": [],
"last": "Estival",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Compagnion",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Baur",
"suffix": ""
},
{
"first": "Lorna",
"middle": [],
"last": "Balkan",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Arnold",
"suffix": ""
}
],
"year": 1996,
"venue": "The 16th International Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Lehmann, Stephan Oepen, Sylvie Regnier- Prost, Klaus Netter, Veronika Lux, Judith Klein, Kirsten Falkedal, Frederik Fouvry, Dominique Estival, Eva Dauphin, Herve Compagnion, Judith Baur, Lorna Balkan, and Doug Arnold. 1996. TSNLP-Test Suites for Natural Language Processing. In COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF110": {
"ref_id": "b110",
"title": "Rationalizing Neural Predictions",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "107--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing Neural Predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117. Association for Computational Linguistics.",
"links": null
},
"BIBREF111": {
"ref_id": "b111",
"title": "Separated by an Un-Common Language: Towards Judgment Language Informed Vector Space Modeling",
"authors": [
{
"first": "Ira",
"middle": [],
"last": "Leviant",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.00106v5"
]
},
"num": null,
"urls": [],
"raw_text": "Ira Leviant and Roi Reichart. 2015. Separated by an Un-Common Language: Towards Judgment Language Informed Vector Space Modeling. arXiv preprint arXiv:1508.00106v5.",
"links": null
},
"BIBREF112": {
"ref_id": "b112",
"title": "Visualizing and Understanding Neural Models in NLP",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "681--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016a. Visualizing and Under- standing Neural Models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691. Association for Computational Linguistics.",
"links": null
},
"BIBREF113": {
"ref_id": "b113",
"title": "Understanding Neural Networks through Representation Erasure",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.08220v3"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding Neural Networks through Representation Erasure. arXiv preprint arXiv: 1612.08220v3.",
"links": null
},
"BIBREF114": {
"ref_id": "b114",
"title": "Deep Text Classification Can Be Fooled",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Hongcheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Miaoqiang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Pan",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Xirong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenchang",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18",
"volume": "",
"issue": "",
"pages": "4208--4215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep Text Classification Can Be Fooled. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4208-4215. International Joint Conferences on Artificial Intelligence Organization.",
"links": null
},
"BIBREF115": {
"ref_id": "b115",
"title": "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Depen- dencies. Transactions of the Association for Computational Linguistics, 4:521-535.",
"links": null
},
"BIBREF116": {
"ref_id": "b116",
"title": "The Mythos of Model Interpretability",
"authors": [
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2016,
"venue": "ICML Workshop on Human Interpretability of Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zachary C. Lipton. 2016. The Mythos of Model Interpretability. In ICML Workshop on Human Interpretability of Machine Learning.",
"links": null
},
"BIBREF117": {
"ref_id": "b117",
"title": "LSTMs Exploit Linguistic Attributes of Data",
"authors": [
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "180--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, and Noah A. Smith. 2018. LSTMs Exploit Linguistic Attributes of Data. In Proceedings of The Third Workshop on Rep- resentation Learning for NLP, pages 180-186. Association for Computational Linguistics.",
"links": null
},
"BIBREF118": {
"ref_id": "b118",
"title": "Delving into Transferable Adversarial Examples and Black-Box Attacks",
"authors": [
{
"first": "Yanpei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xinyun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017. Delving into Transferable Adversarial Examples and Black-Box Attacks. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF119": {
"ref_id": "b119",
"title": "Better Word Representations with Recursive Neural Networks for Morphology",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Richard Socher, and Christopher Manning. 2013. Better Word Representations with Recursive Neural Networks for Mor- phology. In Proceedings of the Seventeenth Conference on Computational Natural Lan- guage Learning, pages 104-113. Association for Computational Linguistics.",
"links": null
},
"BIBREF120": {
"ref_id": "b120",
"title": "Latent Tree Learning with Differentiable Parsers: Shift-Reduce Parsing and Chart Parsing",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Maillard",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP",
"volume": "",
"issue": "",
"pages": "13--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Maillard and Stephen Clark. 2018. Latent Tree Learning with Differentiable Parsers: Shift-Reduce Parsing and Chart Parsing. In Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP, pages 13-18. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF121": {
"ref_id": "b121",
"title": "SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval- 2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sen- tences through Semantic Relatedness and Textual Entailment. In Proceedings of the 8th International Workshop on Semantic Eval- uation (SemEval 2014), pages 1-8. Association for Computational Linguistics.",
"links": null
},
"BIBREF122": {
"ref_id": "b122",
"title": "Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks",
"authors": [
{
"first": "R",
"middle": [
"Thomas"
],
"last": "McCoy",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 40th Annual Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society.",
"links": null
},
"BIBREF123": {
"ref_id": "b123",
"title": "Natural Language Processing with Modular Pdp Networks and Distributed Lexicon",
"authors": [
{
"first": "Risto",
"middle": [],
"last": "Miikkulainen",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"G"
],
"last": "Dyer",
"suffix": ""
}
],
"year": 1991,
"venue": "Cognitive Science",
"volume": "15",
"issue": "3",
"pages": "343--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Risto Miikkulainen and Michael G. Dyer. 1991. Natural Language Processing with Modular Pdp Networks and Distributed Lexicon. Cognitive Science, 15(3):343-399.",
"links": null
},
"BIBREF124": {
"ref_id": "b124",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u010cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Eleventh Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Jan\u010cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF125": {
"ref_id": "b125",
"title": "Understanding Hidden Memories of Recurrent Neural Networks",
"authors": [
{
"first": "Yao",
"middle": [],
"last": "Ming",
"suffix": ""
},
{
"first": "Shaozu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Ruixiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuanzhe",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Huamin",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Conference on Visual Analytics Science and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao Ming, Shaozu Cao, Ruixiang Zhang, Zhen Li, Yuanzhe Chen, Yangqiu Song, and Huamin Qu. 2017. Understanding Hidden Memories of Recurrent Neural Networks. In IEEE Conference on Visual Analytics Science and Technology (IEEE VAST 2017).",
"links": null
},
"BIBREF126": {
"ref_id": "b126",
"title": "Methods for interpreting and understanding deep neural networks",
"authors": [
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2018,
"venue": "Digital Signal Processing",
"volume": "73",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gr\u00e9goire Montavon, Wojciech Samek, and Klaus- Robert M\u00fcller. 2018. Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73:1-15.",
"links": null
},
"BIBREF127": {
"ref_id": "b127",
"title": "Did the Model Understand the Question?",
"authors": [
{
"first": "Pramod",
"middle": [
"Kaushik"
],
"last": "Mudrakarta",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Kedar",
"middle": [],
"last": "Dhamdhere",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1896--1906",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. Did the Model Understand the Question? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1896-1906. Association for Computational Linguistics.",
"links": null
},
"BIBREF128": {
"ref_id": "b128",
"title": "Explainable Prediction of Medical Codes from Clinical Text",
"authors": [
{
"first": "James",
"middle": [],
"last": "Mullenbach",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Duke",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1101--1111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable Prediction of Medical Codes from Clinical Text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101-1111. Association for Computational Linguistics.",
"links": null
},
"BIBREF129": {
"ref_id": "b129",
"title": "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs",
"authors": [
{
"first": "W",
"middle": [
"James"
],
"last": "Murdoch",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. James Murdoch, Peter J. Liu, and Bin Yu. 2018. Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs. In International Conference on Learning Representations.",
"links": null
},
"BIBREF130": {
"ref_id": "b130",
"title": "Learning Effective and Interpretable Semantic Models Using Non-Negative Sparse Embedding",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "1933--1950",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012. Learning Effective and Interpretable Semantic Models Using Non- Negative Sparse Embedding. In Proceedings of COLING 2012, pages 1933-1950. The COLING 2012 Organizing Committee.",
"links": null
},
"BIBREF131": {
"ref_id": "b131",
"title": "Exploring How Deep Neural Networks Form Phonemic Categories",
"authors": [
{
"first": "Tasha",
"middle": [],
"last": "Nagamine",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Seltzer",
"suffix": ""
},
{
"first": "Nima",
"middle": [],
"last": "Mesgarani",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tasha Nagamine, Michael L. Seltzer, and Nima Mesgarani. 2015. Exploring How Deep Neural Networks Form Phonemic Categories. In Interspeech 2015.",
"links": null
},
"BIBREF132": {
"ref_id": "b132",
"title": "On the Role of Nonlinear Transformations in Deep Neural Network Acoustic Models",
"authors": [
{
"first": "Tasha",
"middle": [],
"last": "Nagamine",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Seltzer",
"suffix": ""
},
{
"first": "Nima",
"middle": [],
"last": "Mesgarani",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "803--807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tasha Nagamine, Michael L. Seltzer, and Nima Mesgarani. 2016. On the Role of Non- linear Transformations in Deep Neural Net- work Acoustic Models. In Interspeech 2016, pages 803-807.",
"links": null
},
"BIBREF133": {
"ref_id": "b133",
"title": "Stress Test Evaluation for Natural Language Inference",
"authors": [
{
"first": "Aakanksha",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Abhilasha",
"middle": [],
"last": "Ravichander",
"suffix": ""
},
{
"first": "Norman",
"middle": [],
"last": "Sadeh",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2340--2353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress Test Evaluation for Natural Language Inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353. Association for Computational Linguistics.",
"links": null
},
"BIBREF134": {
"ref_id": "b134",
"title": "Simple Black-Box Adversarial Attacks on Deep Neural Networks",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Narodytska",
"suffix": ""
},
{
"first": "Shiva",
"middle": [],
"last": "Kasiviswanathan",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)",
"volume": "",
"issue": "",
"pages": "1310--1318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Narodytska and Shiva Kasiviswanathan. 2017. Simple Black-Box Adversarial Attacks on Deep Neural Networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1310-1318.",
"links": null
},
"BIBREF135": {
"ref_id": "b135",
"title": "Distributed representations for extended syntactic transformation",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Niklasson",
"suffix": ""
},
{
"first": "Fredrik",
"middle": [],
"last": "Lin\u00e5ker",
"suffix": ""
}
],
"year": 2000,
"venue": "Connection Science",
"volume": "12",
"issue": "3-4",
"pages": "299--314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Niklasson and Fredrik Lin\u00e5ker. 2000. Distributed representations for extended syn- tactic transformation. Connection Science, 12(3-4):299-314.",
"links": null
},
"BIBREF136": {
"ref_id": "b136",
"title": "Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "486--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Niu and Mohit Bansal. 2018. Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 486-496. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF137": {
"ref_id": "b137",
"title": "Transferability in Machine Learning: From Phenomena to Black-Box Attacks Using Adversarial Samples",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Papernot",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.07277v1"
]
},
"num": null,
"urls": [],
"raw_text": "Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. 2016. Transferability in Machine Learning: From Phenomena to Black-Box Attacks Using Adversarial Samples. arXiv preprint arXiv:1605.07277v1.",
"links": null
},
"BIBREF138": {
"ref_id": "b138",
"title": "Practical Black-Box Attacks Against Machine Learning",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Papernot",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Somesh",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Berkay Celik",
"suffix": ""
},
{
"first": "Ananthram",
"middle": [],
"last": "Swami",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, ASIA CCS '17",
"volume": "",
"issue": "",
"pages": "506--519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical Black- Box Attacks Against Machine Learning. In Proceedings of the 2017 ACM on Asia Con- ference on Computer and Communications Security, ASIA CCS '17, pages 506-519, New York, NY, USA, ACM.",
"links": null
},
"BIBREF139": {
"ref_id": "b139",
"title": "Crafting Adversarial Input Sequences for Recurrent Neural Networks",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Papernot",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
},
{
"first": "Ananthram",
"middle": [],
"last": "Swami",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harang",
"suffix": ""
}
],
"year": 2016,
"venue": "Military Communications Conference, MILCOM 2016",
"volume": "",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. 2016. Crafting Adversarial Input Sequences for Recurrent Neural Networks. In Military Communications Conference, MILCOM 2016, pages 49-54. IEEE.",
"links": null
},
"BIBREF140": {
"ref_id": "b140",
"title": "Multimodal Explanations: Justifying Decisions and Pointing to the Evidence",
"authors": [
{
"first": "Dong Huk",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"Anne"
],
"last": "Hendricks",
"suffix": ""
},
{
"first": "Zeynep",
"middle": [],
"last": "Akata",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
}
],
"year": 2018,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach. 2018. Multi- modal Explanations: Justifying Decisions and Pointing to the Evidence. In The IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF141": {
"ref_id": "b141",
"title": "Rotated Word Vector Representations and Their Interpretability",
"authors": [
{
"first": "Sungjoon",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Jinyeong",
"middle": [],
"last": "Bak",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Oh",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "401--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungjoon Park, JinYeong Bak, and Alice Oh. 2017. Rotated Word Vector Representations and Their Interpretability. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 401-411. Association for Computational Linguistics.",
"links": null
},
"BIBREF142": {
"ref_id": "b142",
"title": "Dissecting Contextual Word Embeddings: Architecture and Representation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1499--1509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissect- ing Contextual Word Embeddings: Architecture and Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 1499-1509. Association for Computational Linguistics.",
"links": null
},
"BIBREF143": {
"ref_id": "b143",
"title": "Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "J",
"middle": [
"Edward"
],
"last": "Hu",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"Steven"
],
"last": "White",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "67--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018a. Col- lecting Diverse Natural Language Inference Problems for Sentence Representation Evalu- ation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67-81. Association for Computational Linguistics.",
"links": null
},
"BIBREF144": {
"ref_id": "b144",
"title": "Hypothesis Only Baselines in Natural Language Inference",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "180--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis Only Baselines in Natural Language Inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180-191. Association for Computational Linguistics.",
"links": null
},
"BIBREF145": {
"ref_id": "b145",
"title": "Recursive distributed representations",
"authors": [
{
"first": "Jordan",
"middle": [
"B"
],
"last": "Pollack",
"suffix": ""
}
],
"year": 1990,
"venue": "Artificial Intelligence",
"volume": "46",
"issue": "1",
"pages": "77--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan B. Pollack. 1990. Recursive distrib- uted representations. Artificial Intelligence, 46(1):77-105.",
"links": null
},
"BIBREF146": {
"ref_id": "b146",
"title": "Analyzing Linguistic Knowledge in Sequential Model of Sentence",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "826--835",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016a. Analyzing Linguistic Knowledge in Sequential Model of Sentence. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 826-835, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF147": {
"ref_id": "b147",
"title": "Investigating Language Universal and Specific Properties in Word Embeddings",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1478--1488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016b. Investigating Language Universal and Specific Properties in Word Embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1478-1488, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF148": {
"ref_id": "b148",
"title": "Semantically Equivalent Adversarial Rules for Debugging NLP models",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "856--865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically Equivalent Adversarial Rules for Debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856-865. Association for Computational Linguistics.",
"links": null
},
"BIBREF149": {
"ref_id": "b149",
"title": "Debugging Neural Machine Translations",
"authors": [
{
"first": "Mat\u012bss",
"middle": [],
"last": "Rikters",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.02733v1"
]
},
"num": null,
"urls": [],
"raw_text": "Mat\u012bss Rikters. 2018. Debugging Neural Ma- chine Translations. arXiv preprint arXiv:1808. 02733v1.",
"links": null
},
"BIBREF150": {
"ref_id": "b150",
"title": "Improving Word Sense Disambiguation in Neural Machine Translation with Sense Embeddings",
"authors": [
{
"first": "Annette Rios",
"middle": [],
"last": "Gonzales",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Mascarell",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annette Rios Gonzales, Laura Mascarell, and Rico Sennrich. 2017. Improving Word Sense Disambiguation in Neural Machine Translation with Sense Embeddings. In Proceedings of the Second Conference on Machine Translation, pages 11-19. Association for Computational Linguistics.",
"links": null
},
"BIBREF151": {
"ref_id": "b151",
"title": "Reasoning about Entailment with Neural Attention",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u1ef3",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, and Phil Blunsom. 2016. Reasoning about Entailment with Neural Attention. In International Con- ference on Learning Representations (ICLR).",
"links": null
},
"BIBREF152": {
"ref_id": "b152",
"title": "Adversarial Diversity and Hard Positive Generation",
"authors": [
{
"first": "Andras",
"middle": [],
"last": "Rozsa",
"suffix": ""
},
{
"first": "Ethan",
"middle": [
"M"
],
"last": "Rudd",
"suffix": ""
},
{
"first": "Terrance",
"middle": [
"E"
],
"last": "Boult",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andras Rozsa, Ethan M. Rudd, and Terrance E. Boult. 2016. Adversarial Diversity and Hard Positive Generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 25-32.",
"links": null
},
"BIBREF153": {
"ref_id": "b153",
"title": "Gender Bias in Coreference Resolution",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Leonard",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender Bias in Coreference Resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14. Association for Computational Linguistics.",
"links": null
},
"BIBREF154": {
"ref_id": "b154",
"title": "Parallel Distributed Processing: Explorations in the Microstructure of Cognition",
"authors": [
{
"first": "D",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Mcclelland",
"suffix": ""
}
],
"year": 1986,
"venue": "chapter On Leaning the Past Tenses of English Verbs",
"volume": "2",
"issue": "",
"pages": "216--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. E. Rumelhart and J. L. McClelland. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. volume 2, chapter On Leaning the Past Tenses of English Verbs, pages 216-271. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF155": {
"ref_id": "b155",
"title": "A Neural Attention Model for Abstractive Sentence Summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Em- pirical Methods in Natural Language Pro- cessing, pages 379-389. Association for Computational Linguistics.",
"links": null
},
"BIBREF156": {
"ref_id": "b156",
"title": "Robsut Wrod Reocginiton via Semi-Character Recurrent Neural Network",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3281--3287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi, Kevin Duh, Matt Post, and Benjamin Van Durme. 2017. Robsut Wrod Reocginiton via Semi-Character Recurrent Neural Network. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3281-3287. AAAI Press.",
"links": null
},
"BIBREF158": {
"ref_id": "b158",
"title": "Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Sanchez",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1975--1985",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Sanchez, Jeff Mitchell, and Sebastian Riedel. 2018. Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1975-1985. Association for Computational Linguistics.",
"links": null
},
"BIBREF159": {
"ref_id": "b159",
"title": "Interpretable Adversarial Perturbation in Input Embedding Space for Text",
"authors": [
{
"first": "Motoki",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18",
"volume": "",
"issue": "",
"pages": "4323--4330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable Adversar- ial Perturbation in Input Embedding Space for Text. In Proceedings of the Twenty-Seventh International Joint Conference on Artifi- cial Intelligence, IJCAI-18, pages 4323-4330. International Joint Conferences on Artificial Intelligence Organization.",
"links": null
},
"BIBREF160": {
"ref_id": "b160",
"title": "Semantic Structure and Interpretability of Word Embeddings",
"authors": [
{
"first": "Lutfi",
"middle": [
"Kerem"
],
"last": "Senel",
"suffix": ""
},
{
"first": "Ihsan",
"middle": [],
"last": "Utlu",
"suffix": ""
},
{
"first": "Veysel",
"middle": [],
"last": "Yucesoy",
"suffix": ""
},
{
"first": "Aykut",
"middle": [],
"last": "Koc",
"suffix": ""
},
{
"first": "Tolga",
"middle": [],
"last": "Cukur",
"suffix": ""
}
],
"year": 2018,
"venue": "Speech, and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lutfi Kerem Senel, Ihsan Utlu, Veysel Yucesoy, Aykut Koc, and Tolga Cukur. 2018. Se- mantic Structure and Interpretability of Word Embeddings. IEEE/ACM Transactions on Audio, Speech, and Language Processing.",
"links": null
},
"BIBREF161": {
"ref_id": "b161",
"title": "How Grammatical Is Character-Level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "376--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich. 2017. How Grammatical Is Character-Level Neural Machine Translation? Assessing MT Quality with Contrastive Trans- lation Pairs. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 376-382. Association for Computational Linguistics.",
"links": null
},
"BIBREF162": {
"ref_id": "b162",
"title": "Learning Visually-Grounded Semantics from Contrastive Adversarial Samples",
"authors": [
{
"first": "Haoyue",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jiayuan",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Tete",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yuning",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3715--3727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoyue Shi, Jiayuan Mao, Tete Xiao, Yuning Jiang, and Jian Sun. 2018. Learning Visually- Grounded Semantics from Contrastive Adver- sarial Samples. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3715-3727. Association for Computational Linguistics.",
"links": null
},
"BIBREF163": {
"ref_id": "b163",
"title": "Why Neural Translations are the Right Length",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2278--2282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Kevin Knight, and Deniz Yuret. 2016a. Why Neural Translations are the Right Length. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2278-2282. Association for Computational Linguistics.",
"links": null
},
"BIBREF164": {
"ref_id": "b164",
"title": "Does String-Based Neural MT Learn Source Syntax?",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Inkit",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1526--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016b. Does String-Based Neural MT Learn Source Syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526-1534, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF165": {
"ref_id": "b165",
"title": "Hierarchical interpretations for neural network predictions",
"authors": [
{
"first": "Chandan",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "W",
"middle": [
"James"
],
"last": "Murdoch",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.05337v1"
]
},
"num": null,
"urls": [],
"raw_text": "Chandan Singh, W. James Murdoch, and Bin Yu. 2018. Hierarchical interpretations for neural network predictions. arXiv preprint arXiv:1806.05337v1.",
"links": null
},
"BIBREF166": {
"ref_id": "b166",
"title": "Seq2Seq-Vis: A Visual Debugging Tool for Sequenceto-Sequence Models",
"authors": [
{
"first": "Hendrik",
"middle": [],
"last": "Strobelt",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Behrisch",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Perer",
"suffix": ""
},
{
"first": "Hanspeter",
"middle": [],
"last": "Pfister",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.09299v1"
]
},
"num": null,
"urls": [],
"raw_text": "Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, and Alexander M. Rush. 2018a. Seq2Seq-Vis: A Visual Debugging Tool for Sequence- to-Sequence Models. arXiv preprint arXiv: 1804.09299v1.",
"links": null
},
"BIBREF167": {
"ref_id": "b167",
"title": "LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks",
"authors": [
{
"first": "Hendrik",
"middle": [],
"last": "Strobelt",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Hanspeter",
"middle": [],
"last": "Pfister",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Transactions on Visualization and Computer Graphics",
"volume": "24",
"issue": "1",
"pages": "667--676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hendrik Strobelt, Sebastian Gehrmann, Hanspeter Pfister, and Alexander M. Rush. 2018b. LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks. IEEE Transactions on Visualization and Computer Graphics, 24(1):667-676.",
"links": null
},
"BIBREF168": {
"ref_id": "b168",
"title": "Axiomatic Attribution for Deep Networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th Inter- national Conference on Machine Learning, Volume 70 of Proceedings of Machine Learn- ing Research, pages 3319-3328, International Convention Centre, Sydney, Australia. PMLR.",
"links": null
},
"BIBREF169": {
"ref_id": "b169",
"title": "Sequence to Sequence Learning with Neural Networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V."
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in neural infor- mation processing systems, pages 3104-3112.",
"links": null
},
"BIBREF170": {
"ref_id": "b170",
"title": "On Evaluating the Generalization of LSTM Models in Formal Languages",
"authors": [
{
"first": "Mirac",
"middle": [],
"last": "Suzgun",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"M"
],
"last": "Shieber",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Society for Computation in Linguistics (SCiL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirac Suzgun, Yonatan Belinkov, and Stuart M. Shieber. 2019. On Evaluating the Generaliza- tion of LSTM Models in Formal Languages. In Proceedings of the Society for Computation in Linguistics (SCiL).",
"links": null
},
"BIBREF171": {
"ref_id": "b171",
"title": "Intriguing properties of neural networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF172": {
"ref_id": "b172",
"title": "An Analysis of Attention Mechanisms: The Case of Word Sense Disambiguation in Neural Machine Translation",
"authors": [
{
"first": "Gongbo",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "26--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2018. An Analysis of Attention Mechanisms: The Case of Word Sense Disambiguation in Neural Machine Translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 26-35. Association for Computational Linguistics.",
"links": null
},
"BIBREF173": {
"ref_id": "b173",
"title": "CoupleNet: Paying Attention to Couples with Coupled Attention for Relationship Recommendation",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Luu",
"suffix": ""
},
{
"first": "Siu Cheung",
"middle": [],
"last": "Hui",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2018. CoupleNet: Paying Attention to Couples with Coupled Attention for Relationship Rec- ommendation. In Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM).",
"links": null
},
"BIBREF174": {
"ref_id": "b174",
"title": "The Importance of Being Recurrent for Modeling Hierarchical Structure",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4731--4736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The Importance of Being Recurrent for Modeling Hierarchical Structure. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731-4736. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF175": {
"ref_id": "b175",
"title": "Investigating ''Aspect'' in NMT and SMT: Translating the English Simple Past and Present Perfect",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Vanmassenhove",
"suffix": ""
},
{
"first": "Jinhua",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics in the Netherlands Journal",
"volume": "7",
"issue": "",
"pages": "109--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Vanmassenhove, Jinhua Du, and Andy Way. 2017. Investigating ''Aspect'' in NMT and SMT: Translating the English Simple Past and Present Perfect. Computational Linguistics in the Netherlands Journal, 7:109-128.",
"links": null
},
"BIBREF176": {
"ref_id": "b176",
"title": "Diagnostic Classifiers: Revealing How Neural Networks Process Hierarchical Structure",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Veldhoen",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2016,
"venue": "CEUR Workshop Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Veldhoen, Dieuwke Hupkes, and Willem Zuidema. 2016. Diagnostic Classifiers: Reveal- ing How Neural Networks Process Hierarchical Structure. In CEUR Workshop Proceedings.",
"links": null
},
"BIBREF177": {
"ref_id": "b177",
"title": "Context-Aware Neural Machine Translation Learns Anaphora Resolution",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Serdyukov",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1264--1274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-Aware Neural Machine Translation Learns Anaphora Resolu- tion. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 1264-1274. Association for Computational Linguistics.",
"links": null
},
"BIBREF178": {
"ref_id": "b178",
"title": "Word Representation Models for Morphologically Rich Languages in Neural Machine Translation",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Xuanli",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.04217v1"
]
},
"num": null,
"urls": [],
"raw_text": "Ekaterina Vylomova, Trevor Cohn, Xuanli He, and Gholamreza Haffari. 2016. Word Representation Models for Morphologically Rich Languages in Neural Machine Translation. arXiv preprint arXiv:1606.04217v1.",
"links": null
},
"BIBREF179": {
"ref_id": "b179",
"title": "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amapreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07461v1"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018a. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Under- standing. arXiv preprint arXiv:1804.07461v1.",
"links": null
},
"BIBREF180": {
"ref_id": "b180",
"title": "What Does the Speaker Embedding Encode?",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Interspeech 2017",
"volume": "",
"issue": "",
"pages": "1497--1501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuai Wang, Yanmin Qian, and Kai Yu. 2017a. What Does the Speaker Embedding Encode? In Interspeech 2017, pages 1497-1501.",
"links": null
},
"BIBREF181": {
"ref_id": "b181",
"title": "A Tree-Based Decoder for Neural Machine Translation",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Wang, Hieu Pham, Pengcheng Yin, and Graham Neubig. 2018b. A Tree-Based Decoder for Neural Machine Translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Brussels, Belgium.",
"links": null
},
"BIBREF182": {
"ref_id": "b182",
"title": "Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries",
"authors": [
{
"first": "Yu-Hsuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cheng-Tao",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu-Hsuan Wang, Cheng-Tao Chung, and Hung-yi Lee. 2017b. Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries. In Interspeech 2017.",
"links": null
},
"BIBREF183": {
"ref_id": "b183",
"title": "On the Practical Computational Power of Finite Precision RNNs for Language Recognition",
"authors": [
{
"first": "Gail",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Eran",
"middle": [],
"last": "Yahav",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "740--745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the Practical Computational Power of Finite Precision RNNs for Language Recognition. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 740-745. Association for Computational Linguistics.",
"links": null
},
"BIBREF184": {
"ref_id": "b184",
"title": "Do latent tree learning models identify meaningful structure in sentences",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Drozdov",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "253--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Compu- tational Linguistics, 6:253-267.",
"links": null
},
"BIBREF185": {
"ref_id": "b185",
"title": "Investigating gated recurrent networks for speech synthesis",
"authors": [
{
"first": "Zhizheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5140--5144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhizheng Wu and Simon King. 2016. Inves- tigating gated recurrent networks for speech synthesis. In 2016 IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP), pages 5140-5144. IEEE.",
"links": null
},
"BIBREF186": {
"ref_id": "b186",
"title": "Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data",
"authors": [
{
"first": "Puyudi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jianbo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Jane-Ling",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.12316v1"
]
},
"num": null,
"urls": [],
"raw_text": "Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane- Ling Wang, and Michael I. Jordan. 2018. Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data. arXiv preprint arXiv:1805.12316v1.",
"links": null
},
"BIBREF187": {
"ref_id": "b187",
"title": "ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "259--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs. Transactions of the Association for Computational Linguistics, 4:259-272.",
"links": null
},
"BIBREF188": {
"ref_id": "b188",
"title": "Adversarial Examples: Attacks and Defenses for Deep Learning",
"authors": [
{
"first": "Xiaoyong",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Pan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Qile",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Xiaolin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.07107v3"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. 2017. Adversarial Examples: Attacks and Defenses for Deep Learning. arXiv preprint arXiv:1712.07107v3.",
"links": null
},
"BIBREF189": {
"ref_id": "b189",
"title": "Using ''Annotator Rationales'' to Improve Machine Learning for Text Categorization",
"authors": [
{
"first": "Omar",
"middle": [],
"last": "Zaidan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Piatko",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference",
"volume": "",
"issue": "",
"pages": "260--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using ''Annotator Rationales'' to Improve Machine Learning for Text Cate- gorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computa- tional Linguistics; Proceedings of the Main Conference, pages 260-267. Association for Computational Linguistics.",
"links": null
},
"BIBREF190": {
"ref_id": "b190",
"title": "Visual interpretability for deep learning: A survey",
"authors": [
{
"first": "Quan-Shi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Song-Chun",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "Frontiers of Information Technology & Electronic Engineering",
"volume": "19",
"issue": "1",
"pages": "27--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quan-shi Zhang and Song-chun Zhu. 2018. Visual interpretability for deep learning: A survey. Frontiers of Information Technology & Electronic Engineering, 19(1):27-39.",
"links": null
},
"BIBREF191": {
"ref_id": "b191",
"title": "Rationale-Augmented Convolutional Neural Networks for Text Classification",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Marshall",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "795--804",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Zhang, Iain Marshall, and Byron C. Wallace. 2016. Rationale-Augmented Convolutional Neural Networks for Text Classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 795-804. Association for Computational Linguistics.",
"links": null
},
"BIBREF192": {
"ref_id": "b192",
"title": "Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20. Association for Computational Linguistics.",
"links": null
},
"BIBREF193": {
"ref_id": "b193",
"title": "Adversarially Regularized Autoencoders",
"authors": [
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Kelly",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "5902--5911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander Rush, and Yann LeCun. 2018b. Adversarially Regularized Autoencoders. In Proceedings of the 35th International Conference on Machine Learning, Volume 80 of Proceedings of Ma- chine Learning Research, pages 5902-5911, Stockholmsm\u00e4ssan, Stockholm, Sweden. PMLR.",
"links": null
},
"BIBREF194": {
"ref_id": "b194",
"title": "Generating Natural Adversarial Examples",
"authors": [
{
"first": "Zhengli",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018c. Generating Natural Adversarial Examples. In International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A heatmap visualizing neuron activations. In this case, the activations capture position in the sentence.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "A visualization of attention weights, showing soft alignment between source and target sentences in an NMT model. Reproduced from Bahdanau et al. (2014), with permission.",
"num": null,
"uris": null,
"type_str": "figure"
}
}
}
}