{
"paper_id": "Q18-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:10:37.191340Z"
},
"title": "Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "wenpeng@seas.upenn.edu"
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "Q18-1047",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text t x . In this work, we propose an attentive convolution network, ATTCONV. It extends the context scope of the convolution operation, deriving higher-level features for a word not only from local context, but also from information extracted from nonlocal context by the attention mechanism commonly used in RNNs. This nonlocal context can come (i) from parts of the input text t x that are distant or (ii) from extra (i.e., external) contexts t y . Experiments on sentence modeling with zero-context (sentiment analysis), single-context (textual entailment) and multiple-context (claim verification) demonstrate the effectiveness of ATTCONV in sentence representation learning with the incorporation of context. In particular, attentive convolution outperforms attentive pooling and is a strong competitor to popular attentive RNNs. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Natural language processing (NLP) has benefited greatly from the resurgence of deep neural networks (DNNs), thanks to their high performance with less need of engineered features. (Footnote 1: https://github.com/yinwenpeng/Attentive_Convolution.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A DNN typically is composed of a stack of non-linear transformation layers, each generating a hidden representation for the input by projecting the output of a preceding layer into a new space. To date, building a single and static representation to express an input across diverse problems is far from satisfactory. Instead, it is preferable that the representation of the input vary in different application scenarios. In response, attention mechanisms (Graves, 2013; Graves et al., 2014) have been proposed to dynamically focus on parts of the input that are expected to be more specific to the problem. They are mostly implemented based on fine-grained alignments between two objects, each emitting a dynamic soft-selection over the components of the other, so that the selected elements dominate in the output hidden representation. Attention-based DNNs have demonstrated good performance on many tasks.",
"cite_spans": [
{
"start": 397,
"end": 411,
"text": "(Graves, 2013;",
"ref_id": "BIBREF11"
},
{
"start": 412,
"end": 432,
"text": "Graves et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Convolutional neural networks (CNNs; LeCun et al., 1998) and recurrent neural networks (RNNs; Elman, 1990) are two important types of DNNs. Most work on attention has been done for RNNs. Attention-based RNNs typically take three types of inputs to make a decision at the current step: (i) the current input state, (ii) a representation of local context (computed unidirectionally or bidirectionally; Rockt\u00e4schel et al. [2016] ), and (iii) the attention-weighted sum of hidden states corresponding to nonlocal context (e.g., the hidden states of the encoder in neural machine translation; Bahdanau et al. [2015] ). An important question, therefore, is whether CNNs can benefit from such an attention mechanism as well, and how. This is our technical motivation.",
"cite_spans": [
{
"start": 30,
"end": 36,
"text": "(CNNs;",
"ref_id": null
},
{
"start": 37,
"end": 56,
"text": "LeCun et al., 1998)",
"ref_id": "BIBREF21"
},
{
"start": 94,
"end": 106,
"text": "Elman, 1990)",
"ref_id": "BIBREF9"
},
{
"start": 400,
"end": 425,
"text": "Rockt\u00e4schel et al. [2016]",
"ref_id": "BIBREF36"
},
{
"start": 588,
"end": 610,
"text": "Bahdanau et al. [2015]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our second motivation is natural language understanding. In generic sentence modeling without extra context (Collobert et al., 2011; Kalchbrenner et al., 2014; Kim, 2014) , CNNs learn sentence representations by composing word representations that are conditioned on a local context window. Table 1 : Examples of four premises for the hypothesis t x = \"A cell wall is not present in animal cells.\" in the SCITAIL data set. Right column (hypothesis's label): \"1\" means true, \"0\" otherwise.",
"cite_spans": [
{
"start": 108,
"end": 132,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF6"
},
{
"start": 133,
"end": 159,
"text": "Kalchbrenner et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 160,
"end": 170,
"text": "Kim, 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 339,
"end": 346,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We believe that attentive convolution is needed for some natural language understanding tasks that are essentially sentence modeling within contexts. Examples: textual entailment (is a hypothesis true given a premise as the single context?; Dagan et al. [2013] ) and claim verification (is a claim correct given extracted evidence snippets from a text corpus as the context?; Thorne et al. [2018] ). Consider the SCITAIL (Khot et al., 2018) textual entailment examples in Table 1 ; here, the input text t x is the hypothesis and each premise is a context text t y . And consider the illustration of claim verification in Figure 1 ; here, the input text t x is the claim and t y can consist of multiple pieces of context. In both cases, we would like the representation of t x to be context-specific. In this work, we propose attentive convolution networks, ATTCONV, to model a sentence (i.e., t x ) either in intra-context (where t y = t x ) or extra-context (where t y \u2260 t x and t y can have many pieces) scenarios. In the intra-context case (sentiment analysis, for example), ATTCONV extends the local context window of standard CNNs to cover the entire input text t x . In the extra-context case, ATTCONV extends the local context window to cover accompanying contexts t y .",
"cite_spans": [
{
"start": 206,
"end": 212,
"text": "[2013]",
"ref_id": null
},
{
"start": 328,
"end": 348,
"text": "Thorne et al. [2018]",
"ref_id": "BIBREF41"
},
{
"start": 373,
"end": 392,
"text": "(Khot et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 424,
"end": 431,
"text": "Table 1",
"ref_id": null
},
{
"start": 573,
"end": 581,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For a convolution operation over a window in t x such as (left context , word, right context ), we first compare the representation of word with all hidden states in the context t y to obtain an attentive context representation att context , then convolution filters derive a higher-level representation for word, denoted as word new , by integrating word with three pieces of context: left context , right context , and att context . We interpret this attentive convolution from two perspectives. (i) For intra-context, a higher-level word representation word new is learned by considering the local (i.e., left context and right context ) as well as nonlocal (i.e., att context ) context. (ii) For extra-context, word new is generated to represent word, together with its cross-text alignment att context , in the context left context and right context . In other words, the decision for the word is made based on the connected hidden states of cross-text aligned terms, with local context. We apply ATTCONV to three sentence modeling tasks with variable-size context: a large-scale Yelp sentiment classification task (Lin et al., 2017 ) (intra-context, i.e., no additional context), SCITAIL textual entailment (Khot et al., 2018) (single extra-context), and claim verification (Thorne et al., 2018 ) (multiple extra-contexts). ATTCONV outperforms competitive DNNs with and without attention and achieves state-of-the-art results on the three tasks.",
"cite_spans": [
{
"start": 1117,
"end": 1134,
"text": "(Lin et al., 2017",
"ref_id": "BIBREF24"
},
{
"start": 1210,
"end": 1229,
"text": "(Khot et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 1277,
"end": 1297,
"text": "(Thorne et al., 2018",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Overall, we make the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 This is the first work that equips convolution filters with the attention mechanism commonly used in RNNs. \u2022 We distinguish and build flexible modules (attention source, attention focus, and attention beneficiary) to greatly advance the expressivity of attention mechanisms in CNNs. \u2022 ATTCONV provides a new way to broaden the originally constrained scope of filters in conventional CNNs. Broader and richer context comes from either external context (i.e., t y ) or the sentence itself (i.e., t x ). \u2022 ATTCONV shows its flexibility and effectiveness in sentence modeling with variable-size context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we discuss attention-related DNNs in NLP, the most relevant work for our paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Graves (2013) and Graves et al. (2014) first introduced a differentiable attention mechanism that allows RNNs to focus on different parts of the input. This idea has been broadly explored in RNNs, shown in Figure 2 , to deal with text generation, such as neural machine translation (Bahdanau et al., 2015; Kim et al., 2017; Libovick\u00fd and Helcl, 2017) , response generation in social media (Shang et al., 2015) , document reconstruction (Li et al., 2015) , and document summarization (Nallapati et al., 2016) ; machine comprehension (Hermann et al., 2015; Kumar et al., 2016; Xiong et al., 2016; Seo et al., 2017; Wang and Jiang, 2017; Xiong et al., 2017; Wang et al., 2017a) ; and sentence relation classification, such as textual entailment (Cheng et al., 2016; Rockt\u00e4schel et al., 2016; Wang and Jiang, 2016; Wang et al., 2017b; Chen et al., 2017b) and answer sentence selection (Miao et al., 2016) . We try to explore the RNN-style attention mechanisms in CNNs-more specifically, in convolution.",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "Graves et al. (2014)",
"ref_id": "BIBREF12"
},
{
"start": 282,
"end": 305,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 306,
"end": 323,
"text": "Kim et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 324,
"end": 350,
"text": "Libovick\u00fd and Helcl, 2017)",
"ref_id": "BIBREF23"
},
{
"start": 389,
"end": 409,
"text": "(Shang et al., 2015)",
"ref_id": "BIBREF39"
},
{
"start": 436,
"end": 453,
"text": "(Li et al., 2015)",
"ref_id": "BIBREF39"
},
{
"start": 483,
"end": 507,
"text": "(Nallapati et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 532,
"end": 554,
"text": "(Hermann et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 555,
"end": 574,
"text": "Kumar et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 575,
"end": 594,
"text": "Xiong et al., 2016;",
"ref_id": "BIBREF47"
},
{
"start": 595,
"end": 612,
"text": "Seo et al., 2017;",
"ref_id": "BIBREF38"
},
{
"start": 613,
"end": 634,
"text": "Wang and Jiang, 2017;",
"ref_id": "BIBREF44"
},
{
"start": 635,
"end": 654,
"text": "Xiong et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 655,
"end": 674,
"text": "Wang et al., 2017a)",
"ref_id": "BIBREF45"
},
{
"start": 742,
"end": 762,
"text": "(Cheng et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 763,
"end": 788,
"text": "Rockt\u00e4schel et al., 2016;",
"ref_id": "BIBREF36"
},
{
"start": 789,
"end": 810,
"text": "Wang and Jiang, 2016;",
"ref_id": "BIBREF43"
},
{
"start": 811,
"end": 830,
"text": "Wang et al., 2017b;",
"ref_id": "BIBREF46"
},
{
"start": 831,
"end": 850,
"text": "Chen et al., 2017b)",
"ref_id": "BIBREF4"
},
{
"start": 881,
"end": 900,
"text": "(Miao et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 206,
"end": 214,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "RNNs with Attention",
"sec_num": "2.1"
},
{
"text": "In NLP, there is little work on attention-based CNNs. Gehring et al. (2017) propose an attention-based convolutional seq-to-seq model for machine translation. Both the encoder and decoder are hierarchical convolution layers. At the n th layer of the decoder, the output hidden state of a convolution queries each of the encoder-side hidden states, then a weighted sum of all encoder hidden states is added to the decoder hidden state, and finally this updated hidden state is fed to the convolution at layer n + 1. Their attention implementation relies on the existence of a multi-layer convolution structure-otherwise the weighted context from the encoder side could not play a role in the decoder. So essentially their attention is achieved after convolution. In contrast, we aim to modify the vanilla convolution, so that CNNs with attentive convolution can be applied more broadly.",
"cite_spans": [
{
"start": 54,
"end": 75,
"text": "Gehring et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CNNs with Attention",
"sec_num": "2.2"
},
{
"text": "We discuss two systems that are representative of CNNs that implement the attention in pooling (i.e., the convolution is still not affected): Yin et al. (2016) and dos Santos et al. (2016), illustrated in Figure 3 . Figure 3 : Attentive pooling, summarized from ABCNN (Yin et al., 2016) and APCNN (dos Santos et al., 2016) .",
"cite_spans": [
{
"start": 142,
"end": 159,
"text": "Yin et al. (2016)",
"ref_id": "BIBREF49"
},
{
"start": 364,
"end": 382,
"text": "(Yin et al., 2016)",
"ref_id": "BIBREF49"
},
{
"start": 387,
"end": 418,
"text": "APCNN (dos Santos et al., 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 3",
"ref_id": null
},
{
"start": 312,
"end": 320,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "CNNs with Attention",
"sec_num": "2.2"
},
{
"text": "Specifically, these two systems work on two input sentences, each with a set of matching scores for the hidden states generated by a convolution layer; then, each sentence will learn a weight for every hidden state by comparing this hidden state with all hidden states in the other sentence; finally, each input sentence obtains a representation by a weighted mean pooling over all its hidden states. The core component, weighted mean pooling, was referred to as \"attentive pooling,\" aiming to yield the sentence representation. In contrast to attentive convolution, attentive pooling does not directly connect the hidden states of cross-text aligned phrases in a fine-grained manner to the final decision making; only the matching scores contribute to the final weighting in mean pooling. This important distinction between attentive convolution and attentive pooling is further discussed in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNNs with Attention",
"sec_num": "2.2"
},
{
"text": "Inspired by the attention mechanisms in RNNs, we assume that it is the hidden states of aligned phrases rather than their matching scores that can better contribute to representation learning and decision making. Hence, our attentive convolution differs from attentive pooling in that it uses attended hidden states from extra context (i.e., t y ) or broader-range context within t x to participate in the convolution. In experiments, we will show its superiority.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNNs with Attention",
"sec_num": "2.2"
},
{
"text": "We use bold uppercase (e.g., H) for matrices; bold lowercase (e.g., h) for vectors; bold lowercase with index (e.g., h i ) for columns of H; and non-bold lowercase for scalars. To start, we assume that a piece of text t (t \u2208 {t x , t y }) is represented as a sequence of hidden states h i \u2208 R d (i = 1, 2, . . . , |t|), forming feature map H \u2208 R d\u00d7|t| , where d is the dimensionality of hidden states. Each hidden state h i has its left context l i and right context r i . In concrete CNN systems, contexts l i and r i can cover multiple adjacent hidden states; we set l i = h i\u22121 and r i = h i+1 for simplicity in the following description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ATTCONV Model",
"sec_num": "3"
},
{
"text": "We now describe light and advanced versions of ATTCONV. Recall that ATTCONV aims to compute a representation for t x in a way that convolution filters encode not only local context, but also attentive context over t y .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ATTCONV Model",
"sec_num": "3"
},
{
"text": "Figure 4(a) shows the light version of ATTCONV. It differs in two key points-(i) and (ii)-both from the basic convolution layer that models a single piece of text and from the Siamese CNN that models two text pieces in parallel. (i) A matching function determines how relevant each hidden state in the context t y is to the current hidden state h x i in sentence t x . We then compute an average of the hidden states in the context t y , weighted by the matching scores, to get the attentive context c x i for h x i . (ii) The convolution for position i in t x integrates hidden state h x i with three sources of context: left context h x i\u22121 , right context h x i+1 , and attentive context c x i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "Attentive Context. First, a function generates a matching score e i,j between a hidden state in t x and a hidden state in t y by (i) dot product:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e i,j = (h x i ) T \u2022 h y j",
"eq_num": "(1)"
}
],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "or (ii) bilinear form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "e i,j = (h x i ) T W e h y j (2) (where W e \u2208 R d\u00d7d ), or (iii) additive projection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "e i,j = (v e ) T \u2022 tanh(W e \u2022 h x i + U e \u2022 h y j ) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "where W e , U e \u2208 R d\u00d7d and v e \u2208 R d . Given the matching scores, the attentive context c x i for hidden state h x i is the weighted average of all hidden states in t y :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c x i = j softmax(e i ) j \u2022 h y j",
"eq_num": "(4)"
}
],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "We refer to the concatenation of attentive contexts [c x 1 ; . . . ; c x i ; . . . ; c x |t x | ] as the feature map C x \u2208 R d\u00d7|t x | for t x .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "Attentive Convolution. After attentive context has been computed, a position i in the sentence t x has a hidden state h x i , the left context h x i\u22121 , the right context h x i+1 , and the attentive context c x i . Attentive convolution then generates the higherlevel hidden state at position i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h x i,new = tanh(W \u2022 [h x i\u22121 , h x i , h x i+1 , c x i ] + b) (5) = tanh(W 1 \u2022 [h x i\u22121 , h x i , h x i+1 ]+ W 2 \u2022 c x i + b)",
"eq_num": "(6)"
}
],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "where W \u2208 R d\u00d74d is the concatenation of W 1 \u2208 R d\u00d73d and W 2 \u2208 R d\u00d7d , b \u2208 R d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "As Equation (6) shows, Equation (5) can be achieved by summing up the results of two separate and parallel convolution steps before the non-linearity. The first is still a standard convolution-without-attention over feature map H x by filter width 3 over the window (h x i\u22121 , h x i , h x i+1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "The second is a convolution on the feature map C x (i.e., the attentive context) with filter width 1 (i.e., over each c x i ); then we sum up the results element-wise and add a bias term and the nonlinearity. This divide-then-compose strategy makes the attentive convolution easy to implement in practice, with no need to create a new feature map, as required in Equation 5, to integrate H x and C x . It is worth mentioning that W 1 \u2208 R d\u00d73d corresponds to the filter parameters of a vanilla CNN and the only added parameter here is W 2 \u2208 R d\u00d7d , which only depends on the hidden size. Table 2 (role / text): premise: \"Three firefighters come out of subway station\"; hypothesis: \"Three firefighters putting out a fire inside of a subway station.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "This light ATTCONV shows the basic principles of using RNN-style attention mechanisms in convolution. Our experiments show that this light version of ATTCONV-even though it incurs a limited increase of parameters (i.e., W 2 )-works much better than the vanilla Siamese CNN and some of the pioneering attentive RNNs. The following two considerations show that there is space to improve its expressivity. (i) Higher-level or more abstract representations are required in subsequent layers. We find that directly forwarding the hidden states in t x or t y to the matching process does not work well in some tasks. Pre-learning higher-level or more abstract representations helps in subsequent learning phases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "(ii) Multi-granular alignments are preferred in the interaction modeling between t x and t y . Table 2 shows another example of textual entailment. On the unigram level, \"out\" in the premise matches with \"out\" in the hypothesis perfectly, whereas \"out\" in the premise is contradictory to \"inside\" in the hypothesis. But their context snippets-\"come out\" in the premise and \"putting out a fire\" in the hypothesis-clearly indicate that they are not semantically equivalent. And the gold conclusion for this pair is \"neutral\" (i.e., the hypothesis is possibly true). Therefore, matching should be conducted across phrase granularities.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "We now present advanced ATTCONV. It is more expressive and modular, based on the two foregoing considerations (i) and (ii).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Light ATTCONV",
"sec_num": "3.1"
},
{
"text": "Adel and Sch\u00fctze (2017) distinguish between focus and source of attention. The focus of attention is the layer of the network that is reweighted by attention weights. The source of attention is the information source that is used to compute the attention weights. Adel and Sch\u00fctze showed that increasing the scope of the attention source is beneficial. It possesses some preliminary principles of the query/key/value distinction by Vaswani et al. (2017) . Here, we further extend this principle to define the beneficiary of attention: the feature map (labeled \"beneficiary\" in Figure 4 (b)) that is contextualized by the attentive context (labeled \"attentive context\" in Figure 4(b) ). In the light attentive convolutional layer (Figure 4(a) ), the source of attention is hidden states in sentence t x , the focus of attention is hidden states of the context t y , and the beneficiary of attention is again the hidden states of t x ; that is, it is identical to the source of attention.",
"cite_spans": [
{
"start": 433,
"end": 454,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 574,
"end": 582,
"text": "Figure 4",
"ref_id": "FIGREF5"
},
{
"start": 668,
"end": 679,
"text": "Figure 4(b)",
"ref_id": "FIGREF5"
},
{
"start": 726,
"end": 738,
"text": "(Figure 4(a)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "We now try to distinguish these three concepts further to promote the expressivity of an attentive convolutional layer. We call it \"advanced ATTCONV\"; see Figure 4 ",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o i = tanh(W h \u2022 i i + b h )",
"eq_num": "(7)"
}
],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "g i = sigmoid(W g \u2022 i i + b g ) (8) f gconv (i i ) = g i \u2022 u i + (1 \u2212 g i ) \u2022 o i (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "where i i is a composed representation, denoting a generally defined input phrase [\u2022 \u2022 \u2022 , u i , \u2022 \u2022 \u2022 ] of arbitrary length with u i as the central unigram-level hidden state, and the gate g i sets a trade-off between the unigram-level input u i and the temporary output o i at the phrase level. We elaborate these modules in the remainder of this subsection. Attention Source. First, we present a general instance of generating source of attention by function f mgran (H), learning word representations in multi-granular context. In our system, we consider granularities 1 and 3, corresponding to unigram hidden state and trigram hidden state. For the uni-hidden state case, it is a gated convolution layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h x uni,i = f gconv (h x i )",
"eq_num": "(10)"
}
],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "For the tri-hidden state case:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h x tri,i = f gconv ([h x i\u22121 , h x i , h x i+1 ])",
"eq_num": "(11)"
}
],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "Finally, the overall hidden state at position i is the concatenation of h uni,i and h tri,i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h x mgran,i = [h x uni,i , h x tri,i ]",
"eq_num": "(12)"
}
],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "that is, f mgran (H x ) = H x mgran . Such a comprehensive hidden state can encode the semantics of multi-granular spans at a position, such as \"out\" and \"come out of.\" Gating here implicitly enables cross-granular alignments in the subsequent attention mechanism, as it sets highway connections (Srivastava et al., 2015) between the input granularity and the output granularity.",
"cite_spans": [
{
"start": 298,
"end": 323,
"text": "(Srivastava et al., 2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "Attention Focus. For simplicity, we use the same architecture for the attention source (just introduced) and for the attention focus, t y (i.e., for the attention focus: f mgran (H y ) = H y mgran ; see Figure 4 (b)). Thus, the focus of attention will participate in the matching process as well as be reweighted to form an attentive context vector. We leave exploring different architectures for attention source and focus for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 211,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "Another benefit of multi-granular hidden states in attention focus is to keep structure information in the context vector. In standard attention mechanisms in RNNs, all hidden states are averaged with attention weights to form a context vector, so the order information is missing. By introducing hidden states of larger granularity into CNNs that keep the local order or structures, we boost the attentive effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "Attention Beneficiary. In our system, we simply use f gconv () over uni-granularity to learn a more abstract representation over the current hidden representations in H x , so that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f bene (h x i ) = f gconv (h x i )",
"eq_num": "(13)"
}
],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
{
"text": "Subsequently, the attentive context vector c x i is generated based on attention source feature map f mgran (H x ) and attention focus feature map f mgran (H y ), according to the description of the light ATTCONV. Then attentive convolution is conducted over attention beneficiary feature map f bene (H x ) and the attentive context vectors C x to get a higher-layer feature map for the sentence t x .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advanced ATTCONV",
"sec_num": "3.2"
},
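The pipeline above (source and focus produce the attentive context; the beneficiary is convolved together with it) can be sketched as follows. The dot-product matching function, filter width 3, and parameter shapes are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def attentive_convolution(Hx, Hy, W_conv, W_ctx, b):
    # Light attentive convolution sketch, filter width 3: each position of
    # t_x attends over all hidden states of t_y, and the local window and
    # the attentive context are combined in two summed convolution steps.
    d, n = Hx.shape
    Hp = np.pad(Hx, ((0, 0), (1, 1)))              # zero-pad for width-3 filters
    out = np.empty((b.shape[0], n))
    for i in range(n):
        scores = Hx[:, i] @ Hy                     # e_i,j: match h_x_i against every h_y_j
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                   # softmax over context positions
        c = Hy @ weights                           # attentive context vector c_x_i
        window = Hp[:, i:i + 3].reshape(-1)        # local trigram window around position i
        out[:, i] = np.tanh(W_conv @ window + W_ctx @ c + b)
    return out

rng = np.random.default_rng(1)
d, n, m, dh = 4, 5, 7, 8
Hx = rng.standard_normal((d, n))                   # target sentence t_x
Hy = rng.standard_normal((d, m))                   # context sentence t_y
Hnew = attentive_convolution(Hx, Hy,
                             rng.standard_normal((dh, 3 * d)),
                             rng.standard_normal((dh, d)),
                             rng.standard_normal(dh))
```

Note how the attentive context enters the convolution itself rather than a post-convolution pooling layer.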
{
"text": "Compared with the standard attention mechanism in RNNs, ATTCONV has a similar matching func-tion and a similar process of computing context vectors, but differs in three ways. (i) The discrimination of attention source, focus, and beneficiary improves expressivity. (ii) In CNNs, the surrounding hidden states for a concrete position are available, so the attention matching is able to encode the left context as well as the right context. In RNNs, however, we need bidirectional RNNs to yield both left and right context representations. (iii) As attentive convolution can be implemented by summing up two separate convolution steps (Equations 5 and 6), this architecture provides both attentive representations and representations computed without the use of attention. This strategy is helpful in practice to use richer representations for some NLP problems. In contrast, such a clean modular separation of representations computed with and without attention is harder to realize in attention-based RNNs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3"
},
{
"text": "Prior attention mechanisms explored in CNNs mostly involve attentive pooling (dos Santos et al., 2016; Yin et al., 2016) ; namely, the weights of the post-convolution pooling layer are determined by attention. These weights come from the matching process between hidden states of two text pieces. However, a weight value is not informative enough to tell the relationships between aligned terms. Consider a textual entailment sentence pair for which we need to determine whether \"inside \u2212\u2192 outside\" holds. The matching degree (take cosine similarity as example) of these two words is high: for example, \u2248 0.7 in Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) . On the other hand, the matching score between \"inside\" and \"in\" is lower: 0.31 in Word2Vec, 0.46 in GloVe. Apparently, the higher number 0.7 does not mean that \"outside\" is more likely than \"in\" to be entailed by \"inside.\" Instead, joint representations for aligned phrases [h inside , h outside ], [h inside , h in ] are more informative and enable finer-grained reasoning than a mechanism that can only transmit information downstream by matching scores. We modify the conventional CNN filters so that \"inside\" can make the entailment decision by looking at the representation of the counterpart term (\"outside\" or \"in\") rather than a matching score.",
"cite_spans": [
{
"start": 77,
"end": 102,
"text": "(dos Santos et al., 2016;",
"ref_id": "BIBREF37"
},
{
"start": 103,
"end": 120,
"text": "Yin et al., 2016)",
"ref_id": "BIBREF49"
},
{
"start": 621,
"end": 643,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 654,
"end": 679,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3"
},
{
"text": "A more damaging property of attentive pooling is the following. Even if matching scores could convey the phrase-level entailment degree to some extent, matching weights, in fact, are not leveraged to make the entailment decision directly; instead, they are used to weight the sum of the output hidden states of a convolution as the global sentence representation. In other words, fine-grained entailment degrees are likely to be lost in the summation of many vectors. This illustrates why attentive context vectors participating in the convolution operation are expected to be more effective than post-convolution attentive pooling (more explanations in \u00a74.3, paragraph \"Visualization\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3"
},
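For contrast, here is a minimal sketch of attentive pooling in the ABCNN style, where matching scores only reweight a sum of hidden states; the particular scoring function is a simplifying assumption:

```python
import numpy as np

def attentive_pooling(Hx, Hy):
    # Attentive-pooling sketch: matching scores only reweight a sum of
    # hidden states, so aligned-phrase representations never interact
    # directly; only scalar weights flow downstream.
    M = Hx.T @ Hy                                  # pairwise matching scores
    s = M.sum(axis=1)
    wx = np.exp(s - s.max())
    wx /= wx.sum()                                 # attention weights over positions of t_x
    return Hx @ wx                                 # one vector: fine-grained alignments are summed away

rng = np.random.default_rng(4)
Hx = rng.standard_normal((4, 5))
Hy = rng.standard_normal((4, 7))
sx = attentive_pooling(Hx, Hy)                     # sentence representation of t_x
```

The contrast with attentive convolution is visible in the return value: a single weighted sum, with no joint representation of aligned phrases.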
{
"text": "Intra-context attention and extra-context attention. Figures 4(a) and 4(b) depict the modeling of a sentence t x with its context t y . This is a common application of attention mechanism in the literature; we call it extra-context attention. But ATTCONV can also be applied to model a single text input, that is, intra-context attention. Consider a sentiment analysis example: \"With the 2017 NBA All-Star game in the books I think we can all agree that this was definitely one to remember. Not because of the three-point shootout, the dunk contest, or the game itself but because of the ludicrous trade that occurred after the festivities.\" This example contains informative points at different locations (\"remember\" and \"ludicrous\"); conventional CNNs' ability to model nonlocal dependency is limited because of fixed-size filter widths. In ATTCONV, we can set t y = t x . The attentive context vector then accumulates all related parts together for a given position. In other words, our intra-context attentive convolution is able to connect all related spans together to form a comprehensive decision. This is a new way to broaden the scope of conventional filter widths: A filter now covers not only the local window, but also those spans that are related, but are beyond the scope of the window.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 65,
"text": "Figures 4(a)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3"
},
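Setting t_y = t_x as described turns ATTCONV into intra-context attention. A small sketch of the resulting context vectors (dot-product scoring is an illustrative assumption):

```python
import numpy as np

def intra_context_vectors(Hx):
    # With t_y = t_x, every position attends over the whole input sentence,
    # so the attentive context can pull in related spans far outside the
    # local filter window (dot-product scoring is an illustrative choice).
    scores = Hx.T @ Hx                             # e_i,j between all position pairs
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = e / e.sum(axis=1, keepdims=True)           # row-wise softmax
    return Hx @ A.T                                # c_x_i = sum_j a_i,j * h_x_j

rng = np.random.default_rng(2)
H = rng.standard_normal((4, 10))                   # hidden states of a single sentence
C = intra_context_vectors(H)                       # one context vector per position
```

Each c_x_i is a convex combination of all positions' hidden states, which is how a filter at "remember" can also see "ludicrous."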
{
"text": "Comparison to Transformer. 2 The \"focus\" in ATTCONV corresponds to \"key\" and \"value\" in Transformer; that is, our versions of \"key\" and \"value\" are the same, coming from the context sentence. The \"query\" in Transformer corresponds to the \"source\" and \"beneficiary\" of ATTCONV; namely, our model has two perspectives to utilize the context: one acts as a real query (i.e., \"source\") to attend the context, the other (i.e., \"beneficiary\") takes the attentive con-text back to improve the learned representation of itself. If we reduce ATTCONV to unigram convolutional filters, it is pretty much a single Transformer layer (if we neglect the positional encoding in Transformer and unify the \"query-key-value\" and \"source-focus-beneficiary\" mechanisms).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3"
},
{
"text": "We evaluate ATTCONV on sentence modeling in three scenarios: (i) Zero-context, that is, intracontext; the same input sentence acts as t x as well as t y ; (ii) Single-context, that is, textual entailment-hypothesis modeling with a single premise as the extra-context; and (iii) Multiplecontext, namely, claim verification-claim modeling with multiple extra-contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "All experiments share a common set-up. The input is represented using 300-dimensional publicly available Word2Vec (Mikolov et al., 2013) embeddings; out of vocabulary embeddings are randomly initialized. The architecture consists of the following four layers in sequence: embedding, attentive convolution, max-pooling, and logistic regression. The context-aware representation of t x is forwarded to the logistic regression layer. We use AdaGrad (Duchi et al., 2011) for training. Embeddings are fine-tuned during training. Hyperparameter values include: learning rate 0.01, hidden size 300, batch size 50, filter width 3.",
"cite_spans": [
{
"start": 114,
"end": 136,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 446,
"end": 466,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Common Set-up and Common Baselines",
"sec_num": "4.1"
},
{
"text": "All experiments are designed to explore comparisons in three aspects: (i) within ATTCONV, \"light\" vs. \"advanced\"; (ii) \"attentive convolution\" vs. \"attentive pooling\"/\"attention only\"; and (iii) \"attentive convolution\" vs. \"attentive RNN\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Common Set-up and Common Baselines",
"sec_num": "4.1"
},
{
"text": "To this end, we always report \"light\" and \"advanced\" ATTCONV performance and compare against five types of common baselines: (i) w/o context; (ii) w/o attention; (iii) w/o convolution: Similar to the Transformer's principle (Vaswani et al., 2017) , we discard the convolution operation in Equation (5) and forward the addition of the attentive context c x i and the h x i into a fully connected layer. To keep enough parameters, we stack in total four layers so that \"w/o convolution\" has the same size of parameters as light-ATTCONV; (iv) with attention: RNNs with attention and CNNs with attentive pooling; and (v) prior state of the art, typeset in italics. ",
"cite_spans": [
{
"start": 224,
"end": 246,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Common Set-up and Common Baselines",
"sec_num": "4.1"
},
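The "w/o convolution" baseline of item (iii) can be sketched as follows; treating the attentive context as fixed across the four stacked layers is a simplifying assumption:

```python
import numpy as np

def wo_conv_layer(H, C, W, b):
    # "w/o convolution" baseline sketch: add the attentive context to the
    # hidden states and apply a fully connected transformation, with no
    # convolution over neighboring positions.
    return np.tanh(W @ (H + C) + b[:, None])

rng = np.random.default_rng(3)
d, n = 6, 9
H = rng.standard_normal((d, n))                    # hidden states h_x_i
C = rng.standard_normal((d, n))                    # attentive context vectors c_x_i
for _ in range(4):                                 # four stacked layers, parameter-matched
    W, b = rng.standard_normal((d, d)), rng.standard_normal(d)
    H = wo_conv_layer(H, C, W, b)
```

Because each position is transformed independently, this ablation isolates the contribution of the convolution over neighboring positions.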
{
"text": "We evaluate sentiment analysis on a Yelp benchmark released by Lin et al. 2017: review-star pairs in sizes 500K (train), 2,000 (dev), and 2,000 (test). Most text instances in this data set are long: 25%, 50%, 75% percentiles are 46, 81, and 125 words, respectively. The task is five-way classification: 1 to 5 stars. The measure is accuracy. We use this benchmark because the predominance of long texts lets us evaluate the system performance of encoding long-range context, and the system by Lin et al. is directly related to ATTCONV in intra-context scenario. Baselines. (i) w/o attention. Three baselines from Lin et al. (2017) : Paragraph Vector (Le and Mikolov, 2014 ) (unsupervised sentence representation learning), BiLSTM, and CNN. We also reimplement MultichannelCNN (Kim, 2014) , recognized as a simple but surprisingly strong sentence modeler. (ii) with attention. A vanilla \"Attentive-LSTM\" by Rockt\u00e4schel et al. (2016) . \"RNN Self-Attention\" (Lin et al., 2017) is directly comparable to ATTCONV: it also uses intracontext attention. \"CNN+internal attention\" (Adel and Sch\u00fctze, 2017) , an intra-context attention idea similar to, but less complicated than, Lin et al. (2017) . ABCNN & APCNN -CNNs with attentive pooling.",
"cite_spans": [
{
"start": 613,
"end": 630,
"text": "Lin et al. (2017)",
"ref_id": "BIBREF24"
},
{
"start": 650,
"end": 671,
"text": "(Le and Mikolov, 2014",
"ref_id": "BIBREF20"
},
{
"start": 776,
"end": 787,
"text": "(Kim, 2014)",
"ref_id": "BIBREF17"
},
{
"start": 906,
"end": 931,
"text": "Rockt\u00e4schel et al. (2016)",
"ref_id": "BIBREF36"
},
{
"start": 955,
"end": 973,
"text": "(Lin et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 1071,
"end": 1095,
"text": "(Adel and Sch\u00fctze, 2017)",
"ref_id": "BIBREF0"
},
{
"start": 1169,
"end": 1186,
"text": "Lin et al. (2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Zero-context: Sentiment Analysis",
"sec_num": "4.2"
},
{
"text": "Results and Analysis. Table 3 shows that advanced-ATTCONV surpasses its \"light\" counterpart, and obtains significant improvement over the state of the art. In addition, ATTCONV surpasses attentive pooling (ABCNN&APCNN) with a big margin (>5%) and outperforms the representative attentive-LSTM (>4%).",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Sentence Modeling with Zero-context: Sentiment Analysis",
"sec_num": "4.2"
},
{
"text": "Furthermore, it outperforms the two selfattentive models: CNN+internal attention (Adel and Sch\u00fctze, 2017) and RNN Self-Attention (Lin et al., 2017) , which are specifically designed for single-sentence modeling. Adel and Sch\u00fctze (2017) generate an attention weight for each CNN hidden state by a linear transformation of the same hidden state, then compute weighted average over all hidden states as the text representation. Lin et al. (2017) extend that idea by generating a group of attention weight vectors, then RNN hidden states are averaged by those diverse weighted vectors, allowing extracting different aspects of the text into multiple vector representations. Both works are essentially weighted mean pooling, similar to the attentive pooling in Yin et al. (2016) and dos Santos et al. (2016) .",
"cite_spans": [
{
"start": 81,
"end": 105,
"text": "(Adel and Sch\u00fctze, 2017)",
"ref_id": "BIBREF0"
},
{
"start": 129,
"end": 147,
"text": "(Lin et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 212,
"end": 235,
"text": "Adel and Sch\u00fctze (2017)",
"ref_id": "BIBREF0"
},
{
"start": 425,
"end": 442,
"text": "Lin et al. (2017)",
"ref_id": "BIBREF24"
},
{
"start": 756,
"end": 773,
"text": "Yin et al. (2016)",
"ref_id": "BIBREF49"
},
{
"start": 782,
"end": 802,
"text": "Santos et al. (2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Zero-context: Sentiment Analysis",
"sec_num": "4.2"
},
{
"text": "Next, we compare ATTCONV with Multichan-nelCNN, the strongest baseline system (\"w/o attention\"), for different length ranges to check whether ATTCONV can really encode long-range context effectively. We sort the 2,000 test instances by length, then split them into 10 groups, each consisting of 200 instances. Figure 5 shows performance of ATTCONV vs. MultichannnelCNN.",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 318,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Sentence Modeling with Zero-context: Sentiment Analysis",
"sec_num": "4.2"
},
{
"text": "We observe that ATTCONV consistently outperforms MultichannelCNN for all lengths. Furthermore, the improvement over MultichannelCNN generally increases with length. This is evidence that ATTCONV more effectively models long text. This is likely because of ATTCONV's capability to encode broader context in its filters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Zero-context: Sentiment Analysis",
"sec_num": "4.2"
},
{
"text": "Data Set. SCITAIL (Khot et al., 2018 ) is a textual entailment benchmark designed specifically for a real-world task: multi-choice question answering. All hypotheses t x were obtained by rephrasing (question, correct answer) pairs into single sentences, and premises t y are relevant Web sentences retrieved by an information retrieval method. Then the task is to determine whether a hypothesis is true or not, given a premise as context. All (t x , t y ) pairs are annotated via crowdsourcing. Accuracy is reported. Table 1 shows examples and Table 4 gives statistics. By this construction, a substantial performance improvement on SCITAIL is equivalent to a better QA performance (Khot et al., 2018) . The hypothesis t x is the target sentence, and the premise t y acts as its context.",
"cite_spans": [
{
"start": 18,
"end": 36,
"text": "(Khot et al., 2018",
"ref_id": "BIBREF16"
},
{
"start": 682,
"end": 701,
"text": "(Khot et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 517,
"end": 551,
"text": "Table 1 shows examples and Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Sentence Modeling with a Single Context: Textual Entailment",
"sec_num": "4.3"
},
{
"text": "Baselines. Apart from the common baselines (see Section 4.1), we include systems covered by Khot et al. (2018) : (i) n-gram Overlap: An overlap baseline, considering lexical granularity such as unigrams, one-skip bigrams, and oneskip trigrams. (ii) Decomposable Attention Model (Decomp-Att) (Parikh et al., 2016) : Explore attention mechanisms to decompose the task into subtasks to solve in parallel. (iii) Enhanced LSTM (Chen et al., 2017b) : Enhance LSTM by taking into account syntax and semantics from parsing information. (iv) DGEM (Khot et al., 2018) : A decomposed graph entailment model, the current state-of-the-art. Table 5 presents results on SCITAIL. (i) Within ATTCONV, \"advanced\" beats \"light\" by 1.1%; (ii) \"w/o convolution\" and attentive pooling (i.e., ABCNN & APCNN) get lower performances by 3%-4%; (iii) More complicated attention mechanisms equipped into LSTM (e.g., \"attentive-LSTM\" and \"enhanced-LSTM\") perform even worse.",
"cite_spans": [
{
"start": 92,
"end": 110,
"text": "Khot et al. (2018)",
"ref_id": "BIBREF16"
},
{
"start": 291,
"end": 312,
"text": "(Parikh et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 422,
"end": 442,
"text": "(Chen et al., 2017b)",
"ref_id": "BIBREF4"
},
{
"start": 538,
"end": 557,
"text": "(Khot et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 627,
"end": 634,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Sentence Modeling with a Single Context: Textual Entailment",
"sec_num": "4.3"
},
{
"text": "Error Analysis. To better understand the ATTCONV in SCITAIL, we study some error cases listed in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Modeling with a Single Context: Textual Entailment",
"sec_num": "4.3"
},
{
"text": "Language conventions. Pair #1 uses sequential commas (i.e., in \"the egg, larva, pupa, and adult\") or a special symbol sequence (i.e., in \"egg \u2212> larva \u2212> pupa \u2212> adult\") to form a set or sequence; pair #2 has \"A (or B)\" to express the equivalence of A and B. This challenge is expected to be handled by DNNs with specific training signals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with a Single Context: Textual Entailment",
"sec_num": "4.3"
},
{
"text": "Knowledge beyond the text t y . In #3, \"because smaller amounts of water evaporate in the cool morning\" cannot be inferred from the premise t y directly. The main challenge in #4 is to distinguish \"weight\" from \"force,\" which requires background physical knowledge that is beyond the presented text here and beyond the expressivity of word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with a Single Context: Textual Entailment",
"sec_num": "4.3"
},
{
"text": "Complex discourse relation. The premise in #5 has an \"or\" structure. In #6, the inserted phrase \"with about 16,000 species\" makes the connection between \"nonvascular plants\" and \"the mosses, liverworts, and hornworts\" hard to detect. Both instances require the model to decode the discourse relation. ATTCONV on SNLI. Table 7 shows the comparison. We observe that: (i) classifying hypotheses without looking at premises, that is, \"w/o context\" baseline, results in a large improvement over the \"majority baseline.\" This verifies the strong bias in the hypothesis construction of the SNLI data set (Gururangan et al., 2018; Poliak et al., 2018) . (ii) ATTCONV (advanced) surpasses # (Premise t y , Hypothesis t x ) Pair G/P Challenge 1 (t y ) These insects have 4 life stages, the egg, larva, pupa, and adult. 1/0 language conventions (t x ) The sequence egg \u2212> larva \u2212> pupa \u2212> adult shows the life cycle of some insects.",
"cite_spans": [
{
"start": 597,
"end": 622,
"text": "(Gururangan et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 623,
"end": 643,
"text": "Poliak et al., 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 318,
"end": 325,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Sentence Modeling with a Single Context: Textual Entailment",
"sec_num": "4.3"
},
{
"text": "(t y ) . . . the notochord forms the backbone (or vertebral column). 1/0 language conventions (t x ) Backbone is another name for the vertebral column. 3 (t y ) Water lawns early in the morning . . . prevent evaporation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "1/0 beyond text (t x ) Watering plants and grass in the early morning is a way to conserve water because smaller amounts of water evaporate in the cool morning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "(t y ) . . . the SI unit . . . for force is the Newton (N) and is defined as (kg\u2022m/s \u22122 ). 0/1 beyond text (t x ) Newton (N) is the SI unit for weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "(t y ) Heterotrophs get energy and carbon from living plants or animals (consumers) or from dead organic matter (decomposers). 0/1 discourse relation (t x ) Mushrooms get their energy from decomposing dead organisms. 6 (t y ) . . . are a diverse assemblage of three phyla of nonvascular plants, with 1/0 discourse relation about 16,000 species, that includes the mosses, liverworts, and hornworts. (t x ) Moss is best classified as a nonvascular plant. Table 6 : Error cases of ATTCONV in SCITAIL. \". . . \": truncated text. \"G/P\": gold/predicted label. (Mou et al., 2016) 3.5M 82.1 NES (Munkhdalai and Yu, 2017) 6.3M 84.8 with attention",
"cite_spans": [
{
"start": 553,
"end": 571,
"text": "(Mou et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 586,
"end": 621,
"text": "(Munkhdalai and Yu, 2017) 6.3M 84.8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 453,
"end": 460,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "Attentive-LSTM (Rockt\u00e4schel) 250K 83.5 Self-Attentive (Lin et al., 2017) 95M 84.4 Match-LSTM (Wang and Jiang) 1.9M 86.1 LSTMN (Cheng et al., 2016) 3.4M 86.3 Decomp-Att (Parikh) 580K 86.8 Enhanced LSTM (Chen et al., 2017b) 7.7M 88.6 ABCNN (Yin et al., 2016) 834K 83.7 APCNN (dos Santos et al., 2016) 360K 83.9 ATTCONV -light 360K 86.3 w/o convolution 360K 84.9 ATTCONV -advanced 900K 87.8 State-of-the-art (Peters et al., 2018) 8M 88.7 all \"w/o attention\" baselines and \"with attention\" CNN baselines (i.e., attentive pooling), obtaining a performance (87.8%) that is close to the state of the art (88.7%).",
"cite_spans": [
{
"start": 54,
"end": 72,
"text": "(Lin et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 126,
"end": 146,
"text": "(Cheng et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 201,
"end": 221,
"text": "(Chen et al., 2017b)",
"ref_id": "BIBREF4"
},
{
"start": 238,
"end": 256,
"text": "(Yin et al., 2016)",
"ref_id": "BIBREF49"
},
{
"start": 405,
"end": 426,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "We also report the parameter size in SNLI as most baseline systems did. Table 7 shows that, in comparison to these baselines, our ATTCONV (light and advanced) has a more limited number of parameters, yet its performance is competitive.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "Visualization. In Figure 6 , we visualize the attention mechanisms explored in attentive con-volution ( Figure 6(a) ) and attentive pooling ( Figure 6(b) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 6",
"ref_id": null
},
{
"start": 104,
"end": 115,
"text": "Figure 6(a)",
"ref_id": null
},
{
"start": 142,
"end": 153,
"text": "Figure 6(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "Figure 6(a) explores the visualization of two kinds of features learned by light ATTCONV in SNLI data set (most are short sentences with rich phrase-level reasoning): (i) e i,j in Equation (1) (after softmax), which shows the attention distribution over context t y by the hidden state h x i in sentence t x ; (ii) h x i,new in Equation (5) for i = 1, 2, \u2022 \u2022 \u2022 , |t x |; it shows the contextaware word features in t x . By the two visualized features, we can identify which parts of the context t y are more important for a word in sentence t x , and a max-pooling, over those contextdriven word representations, selects and forwards dominant (word, left context , right context , att context ) combinations to the final decision maker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "Figure 6(a) shows the features 3 of sentence t x = \"A dog jumping for a Frisbee in the snow\" conditioned on the context t y = \"An animal is outside in the cold weather, playing with a plastic toy.\" Observations: (i) The right figure shows that the attention mechanism successfully aligns some cross-sentence phrases that are informative to the textual entailment problem, such as \"dog\" to \"animal\" (i.e., c x dog \u2248 \"animal\"), \"Frisbee\" to \"plastic toy\" and \"playing\" (i.e., c x F risbee \u2248 \"plastic toy\"+\"playing\"); (ii) The left figure shows a max-pooling over the generated features of filter_1 and filter_2 will focus on the contextaware phrases (A, dog, jumping, c x dog ) and (a, (a) Visualization for features generated by ATTCONV's filters on sentence t x and t y . A max-pooling, over filter_1, locates the phrase (A, dog, jumping, c x dog ), and locates the phrase (a, Frisbee, in, c x F risbee ) via filter_2. \"c x dog \" (resp. c x F ris. )-the attentive context of \"dog\" (resp. \"Frisbee\") in t x -mainly comes from \"animal\" (resp. \"toy\" and \"playing\") in t y . (b) Attention visualization for attentive pooling (ABCNN). Based on the words in t x and t y , first, a convolution layer with filter width 3 outputs hidden states for each sentence, then each hidden state will obtain an attention weight for how well this hidden state matches towards all the hidden states in the other sentence, and finally all hidden states in each sentence will be weighted and summed up as the sentence representation. This visualization shows that the spans \"dog jumping for\" and \"in the snow\" in t x and the spans \"animal is outside\" and \"in the cold\" in t y are most indicative to the entailment reasoning. 
Figure 6 : Attention visualization for attentive convolution (top) and attentive pooling (bottom) between sentence t x = \"A dog jumping for a Frisbee in the snow\" (left) and sentence t y = \"An animal is outside in the cold weather, playing with a plastic toy\" (right).",
"cite_spans": [],
"ref_spans": [
{
"start": 1702,
"end": 1710,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "Frisbee, in, c x F risbee ) respectively; the two phrases are crucial to the entailment reasoning for this (t y , t x ) pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "Figure 6(b) shows the phrase-level (i.e., each consecutive trigram) attentions after the convolution operation. As Figure 3 shows, a subsequent pooling step will weight and sum up those phraselevel hidden states as an overall sentence representation. So, even though some phrases such as \"in the snow\" in t x and \"in the cold\" in t y show importance in this pair instance, the final sentence representation still (i) lacks a fine-grained phraseto-phrase reasoning, and (ii) underestimates some indicative phrases such as \"A dog\" in t x and \"An animal\" in t y .",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "Briefly, attentive convolution first performs phrase-to-phrase, inter-sentence reasoning, then composes features; attentive pooling composes #SUPPORTED #REFUTED #NEI train 80,035 29,775 35,639 dev 3,333 3,333 3,333 test 3,333 3,333 3,333 Table 8 : Statistics of claims in the FEVER data set.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 260,
"text": "#REFUTED #NEI train 80,035 29,775 35,639 dev 3,333 3,333 3,333 test 3,333 3,333 3,333 Table 8",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "phrase features as sentence representations, then performs reasoning. Intuitively, attentive convolution better fits the way humans conduct entailment reasoning, and our experiments validate its superiority-it is the hidden states of the aligned phrases rather than their matching scores that support better representation learning and decision-making. The comparisons in both SCITAIL and SNLI show that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "\u2022 CNNs with attentive convolution (i.e., ATTCONV) outperform the CNNs with attentive pooling (i.e., ABCNN and APCNN); \u2022 Some competitors got over-tuned on SNLI while demonstrating mediocre performance in SCITAIL-a real-world NLP task. Our system ATTCONV shows its robustness in both benchmark data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "Data Set. For this task, we use FEVER (Thorne et al., 2018) ; it infers the truthfulness of claims by extracted evidence. The claims in FEVER were manually constructed from the introductory sections of about 50K popular Wikipedia articles in the June 2017 dump. Claims have 9.4 tokens on average. Table 8 lists the claim statistics. In addition to claims, FEVER also provides a Wikipedia corpus of approximately 5.4 million articles, from which gold evidences are gathered and provided. Figure 7 shows the distributions of sentence sizes in FEVER's ground truth evidence set (i.e., the context size in our experimental set-up). We can see that roughly 28% of evidence instances cover more than one sentence and roughly 16% cover more than two sentences.",
"cite_spans": [
{
"start": 38,
"end": 59,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 8",
"ref_id": null
},
{
"start": 487,
"end": 495,
"text": "Figure 7",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
{
"text": "Each claim is labeled as SUPPORTED, RE-FUTED, or NOTENOUGHINFO (NEI) given the gold evidence. The standard FEVER task also explores the performance of evidence extraction, evaluated by F 1 between extracted evidence and gold evidence. This work focuses on the claim entailment part, assuming the evidences are provided (extracted or gold). More specifically, we treat a claim as t x , and its evidence sentences as context t y . This task has two evaluations: (i) ALLaccuracy of claim verification regardless of the validness of evidence; (ii) SUBSET-verification accuracy of a subset of claims, in which the gold evidence for SUPPORTED and REFUTED claims must be fully retrieved. We use the official evaluation toolkit. 4 Set-ups. (i) We adopt the same retrieved evidence set (i.e, contexts t y ) as Thorne et al. (2018) : top-5 most relevant sentences from top-5 retrieved wiki pages by a document retriever (Chen et al., 2017a) . The quality of this evidence set against the ground truth is: 44.22 (recall), 10.44 (precision), 16.89 (F 1 ) on dev, and 45.89 (recall), 10.79 (precision), 17.47 (F 1 ) on test. This set-up challenges our system with potentially unrelated or even misleading context. (ii) We use the ground truth evidence as context. This lets us determine how far our ATTCONV can go for this claim verification problem once the accurate evidence is given.",
"cite_spans": [
{
"start": 801,
"end": 821,
"text": "Thorne et al. (2018)",
"ref_id": "BIBREF41"
},
{
"start": 910,
"end": 930,
"text": "(Chen et al., 2017a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
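As a sanity check on the evidence-retrieval quality reported above, the F 1 values follow directly from the stated precision and recall; a minimal sketch (the function name `f1` is ours, values are taken from the set-up description):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision * recall / (precision + recall)

# Retrieved-evidence quality against the ground truth (percent).
dev_f1 = f1(10.44, 44.22)    # dev: precision 10.44, recall 44.22
test_f1 = f1(10.79, 45.89)   # test: precision 10.79, recall 45.89
print(round(dev_f1, 2), round(test_f1, 2))  # 16.89 17.47
```

Both values match the 16.89 (dev) and 17.47 (test) F 1 scores reported in the set-up.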
{
"text": "Baselines. We first include the two systems explored by Thorne et al. (2018): (i) MLP: a multilayer perceptron baseline with a single hidden layer, based on tf-idf cosine similarity between the claim and the evidence (Riedel et al., 2017); (ii) Decomp-Att (Parikh et al., 2016): a decomposable attention model previously tested on SCITAIL and SNLI. Note that both baselines first rely on an information retrieval system to extract the top-5 relevant sentences from the top-5 retrieved wiki pages as evidence for each claim, then concatenate all evidence sentences into one longer context for the claim. Table 9 : Performance on dev and test of FEVER. In the \"gold evi.\" scenario, ALL and SUBSET are the same.",
"cite_spans": [
{
"start": 56,
"end": 76,
"text": "Thorne et al. (2018)",
"ref_id": "BIBREF41"
},
{
"start": 218,
"end": 239,
"text": "(Riedel et al., 2017)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 602,
"end": 609,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
{
"text": "We then consider two variants of ATTCONV for modeling t x with variable-size context t y . (i) Context-wise: we first use the evidence sentences one by one as context t y to guide the representation learning of the claim t x , generating a group of context-aware representation vectors for the claim; we then apply element-wise max-pooling over this vector group to obtain the final representation of the claim. (ii) Context-conc: we concatenate all evidence sentences into a single piece of context, then model the claim based on this context. This is the same preprocessing step as in Thorne et al. (2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
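The context-wise variant reduces a variable number of context-conditioned claim vectors to one fixed-size representation by element-wise max-pooling; a minimal NumPy sketch (the function name, shapes, and values are illustrative, not the paper's implementation):

```python
import numpy as np

def contextwise_pool(claim_reps):
    """Element-wise max over a group of context-aware claim vectors.

    claim_reps: array of shape (num_contexts, dim), one row per
    evidence sentence used as context t_y for the claim t_x.
    Returns a single vector of shape (dim,).
    """
    return claim_reps.max(axis=0)

# Three evidence sentences -> three context-aware claim vectors (dim 4).
group = np.array([[0.1, 0.9, 0.2, 0.4],
                  [0.5, 0.3, 0.8, 0.1],
                  [0.2, 0.4, 0.6, 0.7]])
final = contextwise_pool(group)
print(final)  # [0.5 0.9 0.8 0.7]
```

Because the max is taken per dimension, the output size is independent of the number of evidence sentences, which is what lets the claim representation handle variable-size context.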
{
"text": "Results. Table 9 compares ATTCONV in different set-ups against the baselines. First, ATTCONV surpasses the top competitor \"Decomp-Att,\" reported in Thorne et al. (2018), by large margins on dev (ALL: 62.26 vs. 52.09) and test (ALL: 61.03 vs. 50.91). In addition, \"advanced-ATTCONV\" consistently outperforms its \"light\" counterpart. Moreover, ATTCONV surpasses attentive pooling (i.e., ABCNN & APCNN) and \"attentive-LSTM\" by >10% in ALL, >6% in SUBSET, and >8% in \"gold evi.\" Figure 8 further explores the fine-grained performance of ATTCONV for different sizes of gold evidence (i.e., different sizes of context t y ). The system shows comparable performance for sizes 1 and 2; even for context sizes larger than 5, it drops by only 5%. These experiments on claim verification clearly show the effectiveness of ATTCONV in sentence modeling with variable-size context. We attribute this to the attention mechanism in ATTCONV, which enables a word or a phrase in the claim t x to \"see\" and accumulate all related clues even if those clues are scattered across multiple contexts t y .",
"cite_spans": [
{
"start": 152,
"end": 172,
"text": "Thorne et al. (2018)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 9",
"ref_id": null
},
{
"start": 476,
"end": 484,
"text": "Figure 8",
"ref_id": "FIGREF10"
}
],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
{
"text": "Error Analysis. We perform error analysis for the \"retrieved evidence\" scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
{
"text": "Error case #1 is due to the failure to fully retrieve all evidence. For example, successfully supporting the claim \"Weekly Idol has a host born in the year 1978\" requires composing information from three evidence sentences, two from the wiki article \"Weekly Idol\" and one from \"Jeong Hyeong-don.\" However, only one of them is retrieved among the top-5 candidates. Our system predicts REFUTED. This error is even more common in instances for which no evidence is retrieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
{
"text": "Error case #2 is due to insufficient representation learning. Consider the false claim \"Corsica belongs to Italy\" (i.e., in the REFUTED class). Even though good evidence is retrieved, the system is misled by noisy evidence: \"It is located . . . west of the Italian Peninsula, with the nearest land mass being the Italian island . . . \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
{
"text": "Error case #3 is due to the lack of more advanced data preprocessing. For a human, it is easy to \"refute\" the claim \"Telemundo is an English-language television network\" given the evidence \"Telemundo is an American Spanish-language terrestrial television . . . \" (from the \"Telemundo\" wiki page) by checking the key phrases \"Spanish-language\" vs. \"English-language.\" Unfortunately, both tokens are unknown words to our system; as a result, they do not have informative embeddings. More careful data preprocessing is expected to help.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Modeling with Multiple Contexts: Claim Verification",
"sec_num": "4.4"
},
{
"text": "We presented ATTCONV, the first work that equips CNNs with the attention mechanism commonly used in RNNs. ATTCONV combines the strengths of CNNs with the strengths of RNN attention mechanisms. On the one hand, it makes broad and rich context available for prediction, whether that context comes from external inputs (extra-context) or internal inputs (intra-context). On the other hand, it takes full advantage of the strengths of convolution: it is more order-sensitive than attention in RNNs, and local-context information can be powerfully and efficiently modeled through convolution filters. Our experiments demonstrate the effectiveness and flexibility of ATTCONV when modeling sentences with variable-size context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "5"
},
{
"text": "Our \"source-focus-beneficiary\" mechanism was inspired by Adel and Sch\u00fctze (2017). Vaswani et al. (2017) later published the Transformer model, which has a similar \"query-key-value\" mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For simplicity, we show 2 out of 300 ATTCONV filters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/sheffieldnlp/feverscorer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge funding for this work by the European Research Council (ERC #740516). We would like to thank the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploring different dimensions of attention for uncertainty detection",
"authors": [
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "22--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heike Adel and Hinrich Sch\u00fctze. 2017. Exploring different dimensions of attention for uncertainty detection. In Proceedings of EACL, pages 22-34, Valencia, Spain.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR, San Diego, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of EMNLP, pages 632-642, Lisbon, Portugal.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reading Wikipedia to answer open-domain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1870--1879",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017a. Reading Wikipedia to answer open-domain questions. In Proceedings of ACL, pages 1870-1879, Vancouver, Canada.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enhanced LSTM for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1657--1668",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017b. Enhanced LSTM for natural language inference. In Proceedings of ACL, pages 1657-1668, Vancouver, Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory-networks for machine reading",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "551--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of EMNLP, pages 551-561, Austin, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [
"P"
],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recognizing Textual Entailment: Models and Applications",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Sammons",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2013,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing Textual Entailment: Models and Applications. Synthesis Lectures on Human Language Technologies. Morgan & Claypool.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Finding structure in time",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science",
"volume": "14",
"issue": "2",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of ICML, pages 1243-1252, Sydney, Australia.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural turing machines",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR, abs/1410.5401.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Annotation artifacts in natural language inference data",
"authors": [
{
"first": "Swabha",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "107--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of NAACL-HLT, pages 107-112, New Orleans, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Kocisk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of NIPS, pages 1693-1701, Montreal, Canada.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A convolutional neural network for modelling sentences",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "655--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL, pages 655-665, Baltimore, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SciTaiL: A textual entailment dataset from science question answering",
"authors": [
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "5189--5197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTaiL: A textual entailment dataset from science question answering. In Proceedings of AAAI, pages 5189-5197, New Orleans, USA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP, pages 1746-1751, Doha, Qatar.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Structured attention networks",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Luong",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. 2017. Structured attention networks. In Proceedings of ICLR, Toulon, France.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Ask me anything: Dynamic memory networks for natural language processing",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Ondruska",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1378--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of ICML, pages 1378-1387, New York City, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of ICML, pages 1188-1196, Beijing, China.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Gradient-based learning applied to document recognition",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Haffner",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the IEEE",
"volume": "86",
"issue": "11",
"pages": "2278--2324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A hierarchical neural autoencoder for paragraphs and documents",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1106--1115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of ACL, pages 1106-1115, Beijing, China.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention strategies for multi-source sequenceto-sequence learning",
"authors": [
{
"first": "Jindrich",
"middle": [],
"last": "Libovick\u00fd",
"suffix": ""
},
{
"first": "Jindrich",
"middle": [],
"last": "Helcl",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "196--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jindrich Libovick\u00fd and Jindrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of ACL, pages 196-202, Vancouver, Canada.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A structured selfattentive sentence embedding",
"authors": [
{
"first": "Zhouhan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Minwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "C\u00edcero",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhouhan Lin, Minwei Feng, C\u00edcero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of ICLR, Toulon, France.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP, pages 1412-1421, Lisbon, Portugal.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural variational inference for text processing",
"authors": [
{
"first": "Yishu",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1727--1736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of ICML, pages 1727-1736, New York City, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111-3119, Lake Tahoe, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Natural language inference by tree-based convolution and heuristic matching",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Men",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "130--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of ACL, pages 130-136, Berlin, Germany.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Neural semantic encoders",
"authors": [
{
"first": "Tsendsuren",
"middle": [],
"last": "Munkhdalai",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "397--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsendsuren Munkhdalai and Hong Yu. 2017. Neural semantic encoders. In Proceedings of EACL, pages 397-407, Valencia, Spain.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Abstractive text summarization using sequence-to-sequence rnns and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, C\u00edcero Nogueira dos Santos, \u00c7aglar G\u00fcl\u00e7ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of CoNLL, pages 280-290, Berlin, Germany.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ankur",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur P. Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of EMNLP, pages 2249-2255, Austin, USA.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532-1543, Doha, Qatar.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227-2237, New Orleans, USA.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Hypothesis only baselines in natural language inference",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of *SEM",
"volume": "",
"issue": "",
"pages": "180--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of *SEM, pages 180-191, New Orleans, USA.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A simple but tough-to-beat baseline for the fake news challenge stance detection task",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Georgios",
"middle": [
"P"
],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Riedel, Isabelle Augenstein, Georgios P. Spithourakis, and Sebastian Riedel. 2017. A simple but tough-to-beat baseline for the fake news challenge stance detection task. CoRR, abs/1707.03264.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Reasoning about entailment with neural attention",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In Proceedings of ICLR, San Juan, Puerto Rico.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Attentive pooling networks",
"authors": [
{
"first": "C\u00edcero",
"middle": [
"Nogueira"
],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00edcero Nogueira dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. CoRR, abs/1602.03609.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Min Joon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of ICLR, Toulon, France.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Neural responding machine for short-text conversation",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1577--1586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of ACL, pages 1577-1586, Beijing, China.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Training very deep networks",
"authors": [
{
"first": "Rupesh",
"middle": [
"Kumar"
],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Greff",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "2377--2385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rupesh Kumar Srivastava, Klaus Greff, and J\u00fcrgen Schmidhuber. 2015. Training very deep networks. In Proceedings of NIPS, pages 2377-2385, Montreal, Canada.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "FEVER: A large-scale dataset for fact extraction and verification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "809--819",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: A large-scale dataset for fact extraction and verification. In Proceedings of NAACL-HLT, pages 809-819, New Orleans, USA.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS, pages 6000-6010, Long Beach, USA.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Learning natural language inference with LSTM",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1442--1451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In Proceedings of NAACL-HLT, pages 1442-1451, San Diego, USA.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Machine comprehension using match-LSTM and answer pointer",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-LSTM and answer pointer. In Proceedings of ICLR, Toulon, France.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Gated self-matching networks for reading comprehension and question answering",
"authors": [
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "189--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017a. Gated self-matching networks for reading comprehension and question answering. In Proceedings of ACL, pages 189-198, Vancouver, Canada.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Bilateral multi-perspective matching for natural language sentences",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "4144--4150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Wael Hamza, and Radu Florian. 2017b. Bilateral multi-perspective matching for natural language sentences. In Proceedings of IJCAI, pages 4144-4150, Melbourne, Australia.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Dynamic memory networks for visual and textual question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "2397--2406",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In Proceedings of ICML, pages 2397-2406, New York City, USA.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Dynamic coattention networks for question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In Proceedings of ICLR, Toulon, France.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "ABCNN: Attention-based convolutional neural network for modeling sentence pairs",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "TACL",
"volume": "4",
"issue": "",
"pages": "259--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. TACL, 4:259-272.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Transactions of the Association for Computational Linguistics, vol. 6, pp. 687-702, 2018. Action Editor: Slav Petrov.Submission batch: 6/2018; Revision batch: 10/2018; Published 12/2018. c 2018 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "premise, modeled as context t y Plant cells have structures that animal cells lack. 0 Animal cells do not have cell walls. 1 The cell wall is not a freestanding structure. 0 Plant cells possess a cell wall, animals never. 1",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Verify claims in contexts.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "A simplified illustration of attention mechanism in RNNs.",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "ATTCONV models sentence t x with context t y .",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "(b). It differs from the light version in three ways: (i) attention source is learned by function f mgran (H x ), feature map H x of t x acting as input; (ii) attention focus is learned by function f mgran (H y ), feature map H y of context t y acting as input; and (iii) attention beneficiary is learned by function f bene (H x ), H x acting as input. Both functions f mgran () and f bene () are based on a gated convolutional function f gconv ():",
"num": null
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"text": "ATTCONV vs. MultichannelCNN for groups of Yelp text with ascending text lengths. ATTCONV performs more robustly across different lengths of text.",
"num": null
},
"FIGREF9": {
"type_str": "figure",
"uris": null,
"text": "Distribution of #sentence in FEVER evidence.",
"num": null
},
"FIGREF10": {
"type_str": "figure",
"uris": null,
"text": "Fine-grained ATTCONV performance given variable-size golden FEVER evidence as claim's context.",
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Multi-granular alignments required in textual entailment.",
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "System comparison of sentiment analysis on Yelp. Significant improvements over state of the art are marked with * (test of equal proportions, p < 0.05).",
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td/><td/><td>systems</td><td>acc</td></tr><tr><td/><td/><td>Majority Class</td><td>60.4</td></tr><tr><td>w/o</td><td>attention</td><td>w/o Context Bi-LSTM NGram model</td><td>65.1 69.5 70.6</td></tr><tr><td/><td/><td>Bi-CNN</td><td>74.4</td></tr><tr><td/><td/><td colspan=\"2\">Enhanced LSTM 70.6</td></tr><tr><td>with</td><td>attention</td><td colspan=\"2\">Attentive-LSTM 71.5 Decomp-Att 72.3 DGEM 77.3 APCNN 75.2</td></tr><tr><td/><td/><td>ABCNN</td><td>75.8</td></tr><tr><td colspan=\"3\">ATTCONV-light</td><td>78.1</td></tr><tr><td/><td colspan=\"2\">w/o convolution</td><td>75.1</td></tr><tr><td colspan=\"3\">ATTCONV-advanced</td><td>79.2</td></tr></table>",
"html": null,
"text": "Statistics of SCITAIL data set.",
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "ATTCONV vs. baselines on SCITAIL.",
"num": null
},
"TABREF9": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Performance comparison on SNLI test. Ensemble systems are not included.",
"num": null
}
}
}
}