{
"paper_id": "Q16-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:07:05.599601Z"
},
"title": "ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {},
"email": "wenpeng@cis.lmu.de"
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM",
"location": {
"addrLine": "Watson Yorktown Heights",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM",
"location": {
"addrLine": "Watson Yorktown Heights",
"region": "NY",
"country": "USA"
}
},
"email": "zhou@us.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/ yinwenpeng/Answer_Selection.",
"pdf_parse": {
"paper_id": "Q16-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/ yinwenpeng/Answer_Selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS) (Yu et al., 2014; Feng et al., 2015) , paraphrase identification (PI) (Madnani et al., 2012; Yin and Sch\u00fctze, 2015a) , textual entailment (TE) (Marelli et al., 2014a; Bowman et al., 2015a) etc.",
"cite_spans": [
{
"start": 101,
"end": 118,
"text": "(Yu et al., 2014;",
"ref_id": "BIBREF55"
},
{
"start": 119,
"end": 137,
"text": "Feng et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 171,
"end": 193,
"text": "(Madnani et al., 2012;",
"ref_id": "BIBREF32"
},
{
"start": 194,
"end": 217,
"text": "Yin and Sch\u00fctze, 2015a)",
"ref_id": "BIBREF53"
},
{
"start": 244,
"end": 267,
"text": "(Marelli et al., 2014a;",
"ref_id": "BIBREF33"
},
{
"start": 268,
"end": 289,
"text": "Bowman et al., 2015a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "s0 how much did Waterboy gross? s + 1 the movie earned $161.5 million s \u2212 1 this was Jerry Reed's final film appearance PI s0 she struck a deal with RH to pen a book today s + 1 she signed a contract with RH to write a book s \u2212 1 she denied today that she struck a deal with RH TE s0 an ice skating rink placed outdoors is full of people s + 1 a lot of people are in an ice skating park s \u2212 1 an ice skating rink placed indoors is full of people Most prior work derives each sentence's representation separately, rarely considering the impact of the other sentence. This neglects the mutual influence of the two sentences in the context of the task. It also contradicts what humans do when comparing two sentences. We usually focus on key parts of one sentence by extracting parts from the other sentence that are related by identity, synonymy, antonymy and other relations. Thus, human beings model the two sentences together, using the content of one sentence to guide the representation of the other. Figure 1 demonstrates that each sentence of a pair partially determines which parts of the other sentence we must focus on. For AS, correctly answering s 0 requires attention on \"gross\": s + 1 contains a corresponding unit (\"earned\") while s \u2212 1 does not. For PI, focus should be removed from \"today\" to correctly recognize < s 0 , s + 1 > as paraphrases and < s 0 , s \u2212 1 > as non-paraphrases. For TE, we need to focus on \"full of people\" (to recognize TE for < s 0 , s + 1 >) and on \"outdoors\" / \"indoors\" (to recognize non-TE for < s 0 , s \u2212 1 >). These examples show the need for an architecture that computes different representations of s i for different s 1\u2212i (i \u2208 {0, 1}). Convolutional Neural Networks (CNNs) (LeCun et al., 1998) are widely used to model sentences (Kalchbrenner et al., 2014; Kim, 2014) and sentence pairs (Socher et al., 2011; Yin and Sch\u00fctze, 2015a) , especially in classification tasks. 
CNNs are supposed to be good at extracting robust and abstract features of input. This work presents the ABCNN, an attention-based convolutional neural network, that has a powerful mechanism for modeling a sentence pair by taking into account the interdependence between the two sentences. The ABCNN is a general architecture that can handle a wide variety of sentence pair modeling tasks.",
"cite_spans": [
{
"start": 1722,
"end": 1742,
"text": "(LeCun et al., 1998)",
"ref_id": "BIBREF28"
},
{
"start": 1778,
"end": 1805,
"text": "(Kalchbrenner et al., 2014;",
"ref_id": "BIBREF25"
},
{
"start": 1806,
"end": 1816,
"text": "Kim, 2014)",
"ref_id": "BIBREF26"
},
{
"start": 1836,
"end": 1857,
"text": "(Socher et al., 2011;",
"ref_id": "BIBREF43"
},
{
"start": 1858,
"end": 1881,
"text": "Yin and Sch\u00fctze, 2015a)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [
{
"start": 1004,
"end": 1012,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "AS",
"sec_num": null
},
{
"text": "Some prior work proposes simple mechanisms that can be interpreted as controlling varying attention; e.g., Yih et al. (2013) employ word alignment to match related parts of the two sentences. In contrast, our attention scheme based on CNNs models relatedness between two parts fully automatically. Moreover, attention at multiple levels of granularity, not only at word level, is achieved as we stack multiple convolution layers that increase abstraction.",
"cite_spans": [
{
"start": 107,
"end": 124,
"text": "Yih et al. (2013)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AS",
"sec_num": null
},
{
"text": "Prior work on attention in deep learning (DL) mostly addresses long short-term memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) . LSTMs achieve attention usually in a word-to-word scheme, and word representations mostly encode the whole context within the sentence Rockt\u00e4schel et al., 2016) . It is not clear whether this is the best strategy; e.g., in the AS example in Figure 1 , it is possible to determine that \"how much\" in s 0 matches \"$161.5 million\" in s 1 without taking the entire sentence contexts into account. This observation was also investigated by Yao et al. (2013b) where an information retrieval system retrieves sentences with tokens labeled as DATE by named entity recognition or as CD by POS tagging if there is a \"when\" question. However, labels or POS tags require extra tools. CNNs benefit from incorporating attention into representations of local phrases detected by filters; in contrast, LSTMs encode the whole context to form attention-based word representations -a strategy that is more complex than the CNN strategy and (as our experiments suggest) performs less well for some tasks.",
"cite_spans": [
{
"start": 103,
"end": 137,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF20"
},
{
"start": 275,
"end": 300,
"text": "Rockt\u00e4schel et al., 2016)",
"ref_id": "BIBREF40"
},
{
"start": 575,
"end": 593,
"text": "Yao et al. (2013b)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [
{
"start": 381,
"end": 389,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "AS",
"sec_num": null
},
{
"text": "Apart from these differences, it is clear that attention has as much potential for CNNs as it does for LSTMs. As far as we know, this is the first NLP paper that incorporates attention into CNNs. Our ABCNNs get state-of-the-art in AS and TE tasks, and competitive performance in PI, then obtains further improvements over all three tasks when linguistic features are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AS",
"sec_num": null
},
{
"text": "Non-DL on Sentence Pair Modeling. Sentence pair modeling has attracted lots of attention in the past decades. Many tasks can be reduced to a semantic text matching problem. Due to the variety of word choices and inherent ambiguities in natural language, bag-of-word approaches with simple surface-form word matching tend to produce brittle results with poor prediction accuracy (Bilotti et al., 2007) . As a result, researchers put more emphasis on exploiting syntactic and semantic structure. Representative examples include methods based on deeper semantic analysis (Shen and Lapata, 2007; Moldovan et al., 2007) , tree edit-distance (Punyakanok et al., 2004; Heilman and Smith, 2010) and quasi-synchronous grammars (Wang et al., 2007) that match the dependency parse trees of the two sentences. Instead of focusing on the high-level semantic representation, Yih et al. (2013) turn their attention to improving the shallow semantic component, lexical semantics, by performing semantic matching based on a latent word-alignment structure (cf. Chang et al. (2010) ). Lai and Hockenmaier (2014) explore finer-grained word overlap and alignment between two sentences using negation, hypernym, synonym and antonym relations. Yao et al. (2013a) extend word-to-word alignment to phraseto-phrase alignment by a semi-Markov CRF. However, such approaches often require more computational resources. In addition, employing syntactic or semantic parsers -which produce errors on many sentences -to find the best match between the structured representations of two sentences is not trivial.",
"cite_spans": [
{
"start": 378,
"end": 400,
"text": "(Bilotti et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 568,
"end": 591,
"text": "(Shen and Lapata, 2007;",
"ref_id": "BIBREF42"
},
{
"start": 592,
"end": 614,
"text": "Moldovan et al., 2007)",
"ref_id": "BIBREF38"
},
{
"start": 636,
"end": 661,
"text": "(Punyakanok et al., 2004;",
"ref_id": "BIBREF39"
},
{
"start": 662,
"end": 686,
"text": "Heilman and Smith, 2010)",
"ref_id": "BIBREF19"
},
{
"start": 718,
"end": 737,
"text": "(Wang et al., 2007)",
"ref_id": "BIBREF46"
},
{
"start": 1044,
"end": 1063,
"text": "Chang et al. (2010)",
"ref_id": "BIBREF8"
},
{
"start": 1067,
"end": 1093,
"text": "Lai and Hockenmaier (2014)",
"ref_id": "BIBREF27"
},
{
"start": 1222,
"end": 1240,
"text": "Yao et al. (2013a)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "DL on Sentence Pair Modeling. To address some of the challenges of non-DL work, much recent work uses neural networks to model sentence pairs for AS, PI and TE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For AS, Yu et al. (2014) present a bigram CNN to model question and answer candidates. extend this method and get state-of-the-art performance on the WikiQA dataset (Section 5.1). Feng et al. (2015) test various setups of a bi-CNN architecture on an insurance domain QA dataset. Tan et al. (2016) explore bidirectional LSTMs on the same dataset. Our approach is different because we do not model the sentences by two independent neural networks in parallel, but instead as an interdependent sentence pair, using attention.",
"cite_spans": [
{
"start": 8,
"end": 24,
"text": "Yu et al. (2014)",
"ref_id": "BIBREF55"
},
{
"start": 180,
"end": 198,
"text": "Feng et al. (2015)",
"ref_id": "BIBREF15"
},
{
"start": 279,
"end": 296,
"text": "Tan et al. (2016)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For PI, Blacoe and Lapata (2012) form sentence representations by summing up word embeddings. Socher et al. (2011) use recursive autoencoders (RAEs) to model representations of local phrases in sentences, then pool similarity values of phrases from the two sentences as features for binary classification. Yin and Sch\u00fctze (2015a) similarly replace an RAE with a CNN. In all three papers, the representation of one sentence is not influenced by the other -in contrast to our attention-based model. For TE, Bowman et al. (2015b) use recursive neural networks to encode entailment on SICK (Marelli et al., 2014b) . Rockt\u00e4schel et al. (2016) present an attention-based LSTM for the Stanford natural language inference corpus (Bowman et al., 2015a). Our system is the first CNN-based work on TE.",
"cite_spans": [
{
"start": 8,
"end": 32,
"text": "Blacoe and Lapata (2012)",
"ref_id": "BIBREF3"
},
{
"start": 94,
"end": 114,
"text": "Socher et al. (2011)",
"ref_id": "BIBREF43"
},
{
"start": 586,
"end": 609,
"text": "(Marelli et al., 2014b)",
"ref_id": "BIBREF34"
},
{
"start": 612,
"end": 637,
"text": "Rockt\u00e4schel et al. (2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Some prior work aims to solve a general sentence matching problem. Hu et al. (2014) present two CNN architectures, ARC-I and ARC-II, for sentence matching. ARC-I focuses on sentence representation learning while ARC-II focuses on matching features on phrase level. Both systems were tested on PI, sentence completion (SC) and tweetresponse matching. Yin and Sch\u00fctze (2015b) propose the MultiGranCNN architecture to model general sentence matching based on phrase matching on multiple levels of granularity and get promising results for PI and SC. Wan et al. (2016) try to match two sentences in AS and SC by multiple sentence representations, each coming from the local representations of two LSTMs. Our work is the first one to investigate attention for the general sentence matching task.",
"cite_spans": [
{
"start": 67,
"end": 83,
"text": "Hu et al. (2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Attention-Based DL in Non-NLP Domains. Even though there is little if any work on attention mechanisms in CNNs for NLP, attention-based CNNs have been used in computer vision for visual question answering (Chen et al., 2015) , image classification (Xiao et al., 2015) , caption generation , image segmentation (Hong et al., 2016) and object localization (Cao et al., 2015 Attention-Based DL in NLP. Attention-based DL systems have been applied to NLP after their success in computer vision and speech recognition. They mainly rely on RNNs and end-to-end encoderdecoders for tasks such as machine translation and text reconstruction (Li et al., 2015; Rush et al., 2015) . Our work takes the lead in exploring attention mechanisms in CNNs for NLP tasks.",
"cite_spans": [
{
"start": 205,
"end": 224,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 248,
"end": 267,
"text": "(Xiao et al., 2015)",
"ref_id": "BIBREF47"
},
{
"start": 310,
"end": 329,
"text": "(Hong et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 354,
"end": 371,
"text": "(Cao et al., 2015",
"ref_id": "BIBREF7"
},
{
"start": 632,
"end": 649,
"text": "(Li et al., 2015;",
"ref_id": "BIBREF29"
},
{
"start": 650,
"end": 668,
"text": "Rush et al., 2015)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We now introduce our basic (non-attention) CNN that is based on the Siamese architecture (Bromley et al., 1993) , i.e., it consists of two weightsharing CNNs, each processing one of the two sentences, and a final layer that solves the sentence pair task. See Figure 2 . We refer to this architecture as the BCNN. The next section will then introduce the ABCNN, an attention architecture that extends the BCNN. Table 1 gives our notational conventions.",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(Bromley et al., 1993)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 259,
"end": 267,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 410,
"end": 417,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "In our implementation and also in the mathematical formalization of the model given below, we pad the two sentences to have the same length s = max(s 0 , s 1 ). However, in the figures we show different lengths because this gives a better intuition of how the model works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "We now describe the BCNN's four types of layers: input, convolution, average pooling and output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "Input layer. In the example in the figure, the two input sentences have 5 and 7 words, respectively. Convolution layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "Let v 1 , v 2 , . . . , v s be the words of a sentence and c i \u2208 R w\u2022d 0 , 0 < i < s + w, the concatenated embeddings of v i\u2212w+1 , . . . , v i , where embeddings for v j are set to zero when j < 1 or j > s. We then generate the representation p i \u2208 R d 1 for the phrase v i\u2212w+1 , . . . , v i using the convolution weights W \u2208 R d 1 \u00d7wd 0 as follows: p i = tanh(W \u2022 c i + b), where b \u2208 R d 1 is the bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "Average pooling layer. Pooling (including min, max, average pooling) is commonly used to extract robust features from convolution. In this paper, we introduce attention weighting as an alternative, but use average pooling as a baseline as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "For the output feature map of the last convolution layer, we do column-wise averaging over all columns, denoted as all-ap. This generates a representation vector for each of the two sentences, shown as the top \"Average pooling (all-ap)\" layer below \"Logistic regression\" in Figure 2 . These two vectors are the basis for the sentence pair decision.",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 282,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "For the output feature map of non-final convolution layers, we do column-wise averaging over windows of w consecutive columns, denoted as w-ap; shown as the lower \"Average pooling (w-ap)\" layer in Figure 2 . For filter width w, a convolution layer transforms an input feature map of s columns into a new feature map of s + w \u2212 1 columns; average pooling transforms this back to s columns. This architecture supports stacking an arbitrary number of convolution-pooling blocks to extract increasingly abstract features. Input features to the bottom layer are words, input features to the next layer are short phrases and so on. Each level generates more abstract features of higher granularity.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 205,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "The last layer is an output layer, chosen according to the task; e.g., for binary classification tasks, this layer is logistic regression (see Figure 2 ). Other types of output layers are introduced below.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "We found that in most cases, performance is boosted if we provide the output of all pooling layers as input to the output layer. For each non-final average pooling layer, we perform w-ap (pooling over windows of w columns) as described above, but we also perform all-ap (pooling over all columns) and forward the result to the output layer. This improves performance because representations from different layers cover the properties of the sentences at different levels of abstraction and all of these levels can be important for a particular sentence pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BCNN: Basic Bi-CNN",
"sec_num": "3"
},
{
"text": "We now describe three architectures based on the BCNN, the ABCNN-1, the ABCNN-2 and the ABCNN-3, that each introduces an attention mechanism for modeling sentence pairs; see Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 182,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "ABCNN-1. The ABCNN-1 (Figure 3(a) ) employs an attention feature matrix A to influence convolution. Attention features are intended to weight those units of s i more highly in convolution that are relevant to a unit of s 1\u2212i (i \u2208 {0, 1}); we use the term \"unit\" here to refer to words on the lowest level and to phrases on higher levels of the network. representation of a unit, a word on the lowest level and a phrase on higher levels. We first describe the attention feature matrix A informally (layer \"Conv input\", middle column, in Figure 3 (a)). A is generated by matching units of the left representation feature map with units of the right representation feature map such that the attention values of row i in A denote the attention distribution of the i-th unit of s 0 with respect to s 1 , and the attention values of column j in A denote the attention distribution of the j-th unit of s 1 with respect to s 0 . A can be viewed as a new feature map of s 0 (resp. s 1 ) in row (resp. column) direction because each row (resp. column) is a new feature vector of a unit in s 0 (resp. s 1 ). Thus, it makes sense to combine this new feature map with the representation feature maps and use both as input to the convolution operation. We achieve this by transforming A into the two blue matrices in Figure 3 (a) that have the same format as the representation feature maps. As a result, the new input of convolution has two feature maps for each sentence (shown in red and blue). Our motivation is that the attention feature map will guide the convolution to learn \"counterpart-biased\" sentence representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 33,
"text": "(Figure 3(a)",
"ref_id": null
},
{
"start": 536,
"end": 544,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1303,
"end": 1312,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "More formally, let F i,r \u2208 R d\u00d7s be the representation feature map of sentence i (i \u2208 {0, 1}). Then we define the attention matrix A \u2208 R s\u00d7s as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A i,j = match-score(F 0,r [:, i], F 1,r [:, j])",
"eq_num": "(1)"
}
],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "The function match-score can be defined in a variety of ways. We found that 1/(1 + |x \u2212 y|) works well where | \u2022 | is Euclidean distance. Given attention matrix A, we generate the attention feature map F i,a for s i as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "F 0,a = W 0 \u2022 A , F 1,a = W 1 \u2022 A The weight matrices W 0 \u2208 R d\u00d7s , W 1 \u2208 R d\u00d7s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "are parameters of the model to be learned in training. 1 We stack the representation feature map F i,r and the attention feature map F i,a as an order 3 tensor and feed it into convolution to generate a higherlevel representation feature map for s i (i \u2208 {0, 1}). In Figure 3(a) , s 0 has 5 units, s 1 has 7. The output of convolution (shown in the top layer, filter width w = 3) is a higher-level representation feature map with 7 columns for s 0 and 9 columns for s 1 .",
"cite_spans": [
{
"start": 55,
"end": 56,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 267,
"end": 278,
"text": "Figure 3(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "ABCNN-2. The ABCNN-1 computes attention weights directly on the input representation with the aim of improving the features computed by convolution. The ABCNN-2 (Figure 3(b) ) instead computes attention weights on the output of convolution with the aim of reweighting this convolution output. In the example shown in Figure 3(b) , the feature maps output by convolution for s 0 and s 1 (layer marked \"Convolution\" in Figure 3(b) ) have 7 and 9 columns, respectively; each column is the representation of a unit. The attention matrix A compares all units in s 0 with all units of s 1 . We sum all attention values for a unit to derive a single attention weight for that unit. This corresponds to summing all values in a row of A for s 0 (\"col-wise sum\", resulting in the column vector of size 7 shown) and summing all values in a column for s 1 (\"row-wise sum\", resulting in the row vector of size 9 shown).",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 173,
"text": "(Figure 3(b)",
"ref_id": null
},
{
"start": 317,
"end": 328,
"text": "Figure 3(b)",
"ref_id": null
},
{
"start": 417,
"end": 428,
"text": "Figure 3(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "More formally, let A \u2208 R s\u00d7s be the attention matrix, a 0,j = A[j, :] the attention weight of unit j in s 0 , a 1,j = A[:, j] the attention weight of unit j in s 1 and F c i,r \u2208 R d\u00d7(s i +w\u22121) the output of convolution for s i . Then the j-th column of the new feature map F p i,r generated by w-ap is derived by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "F p i,r [:, j] = \u2211 k=j:j+w a i,k F c i,r [:, k], j = 1 . . . s i . Note that F p i,r \u2208 R d\u00d7s i , i.e., ABCNN-2 pooling generates an output feature map of the same size as the input feature map of convolution. This allows us to stack multiple convolution-pooling blocks to extract features of increasing abstraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "There are three main differences between the ABCNN-1 and the ABCNN-2. (i) Attention in the ABCNN-1 impacts convolution indirectly while attention in the ABCNN-2 influences pooling through direct attention weighting. (ii) The ABCNN-1 requires the two matrices W i to convert the attention matrix into attention feature maps; and the input to convolution has two times as many feature maps. Thus, the ABCNN-1 has more parameters than the ABCNN-2 and is more vulnerable to overfitting. (iii) As pooling is performed after convolution, pooling handles larger-granularity units than convolution; e.g., if the input to convolution has word level granularity, then the input to pooling has phrase level granularity, the phrase size being equal to filter size w. Thus, the ABCNN-1 and the ABCNN-2 implement attention mechanisms for linguistic units of different granularity. The complementarity of the ABCNN-1 and the ABCNN-2 motivates us to propose the ABCNN-3, a third architecture that combines elements of the two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "ABCNN-3 (Figure 3(c) ) combines the ABCNN-1 and the ABCNN-2 by stacking them; it combines the strengths of the ABCNN-1 and -2 by allowing the attention mechanism to operate (i) both on the convolution and on the pooling parts of a convolutionpooling block and (ii) both on the input granularity and on the more abstract output granularity.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 20,
"text": "(Figure 3(c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "ABCNN: Attention-Based BCNN",
"sec_num": "4"
},
{
"text": "We test the proposed architectures on three tasks: answer selection (AS), paraphrase identification (PI) and textual entailment (TE).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Common Training Setup. Words are initialized by 300-dimensional word2vec embeddings and not changed during training. A single randomly initialized embedding is created for all unknown words by uniform sampling from [-.01,.01]. We employ Adagrad (Duchi et al., 2011) and L 2 regularization.",
"cite_spans": [
{
"start": 245,
"end": 265,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Network Configuration. Each network in the experiments below consists of (i) an initialization block b 1 that initializes words by word2vec embeddings, (ii) a stack of k \u2212 1 convolution-pooling blocks b 2 , . . . , b k , computing increasingly abstract features, and (iii) one final LR layer (logistic regression layer) as shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 332,
"end": 340,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The input to the LR layer consists of kn features -each block provides n similarity scores, e.g., n cosine similarity scores. Figure 2 shows the two sentence vectors output by the final block b k of the stack (\"sentence representation 0\", \"sentence representation 1\"); this is the basis of the last n similarity scores. As we explained in the final paragraph of Section 3, we perform all-ap pooling for all blocks, not just for b k . Thus we get one sentence representation each for s 0 and s 1 for each block b 1 , . . . , b k . We compute n similarity scores for each block (based on the block's two sentence representations). Thus, we compute a total of kn similarity scores and these scores are input to the LR layer. Depending on the task, we use different methods for computing the similarity score: see below.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Layerwise Training. In our training regime, we first train a network consisting of just one convolution-pooling block b 2 . We then create a new network by adding a block b 3 , initialize its b 2 block with the previously learned weights for b 2 and train b 3 keeping the previously learned weights for b 2 fixed. We repeat this procedure until all k \u2212 1 convolution-pooling blocks are trained. We found that this training regime gives us good performance and shortens training times considerably. Since similarity scores of lower blocks are kept unchanged once they have been learned, this also has the nice effect that \"simple\" similarity scores (those based on surface features) are learned first and subsequent training phases can focus on complementary scores derived from more complex abstract features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
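The layerwise regime above can be sketched in a few lines. This is a toy illustration only: dense layers stand in for the convolution-pooling blocks, and all names (`train_block`), shapes and hyperparameters are hypothetical, not the paper's.

```python
import numpy as np

# Toy sketch of layerwise training: each new block is trained while all
# previously trained blocks are frozen (used in the forward pass, not updated).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))                 # toy inputs
Y = rng.normal(size=(32, 8))                 # toy regression targets

def train_block(W, inputs, targets, frozen, lr=0.01, steps=50):
    """One training phase: gradient descent on the newest block only."""
    for _ in range(steps):
        h = inputs
        for W_f in frozen:                   # forward through frozen blocks
            h = np.tanh(h @ W_f)
        pred = h @ W
        grad = h.T @ (pred - targets) / len(targets)
        W -= lr * grad                       # only the newest block is updated
    return W

frozen = []
for _ in range(2):                           # two "convolution-pooling" blocks
    W_new = rng.normal(scale=0.1, size=(8, 8))
    W_new = train_block(W_new, X, Y, frozen)
    frozen.append(W_new)                     # freeze before adding the next block
```

Freezing the lower blocks is what makes the "simple" similarity scores stable once learned, so later phases only fit complementary features.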
{
"text": "Classifier. We found that performance increases if we do not use the output of the LR layer as the final decision, but instead train a linear SVM or a logistic regression with default parameters 2 directly on the input to the LR layer (i.e., on the kn similarity scores that are generated by the k-block stack after network training is completed). Direct training of SVMs/LR seems to get closer to the global optimum than gradient descent training of CNNs. Table 2 shows hyperparameters, tuned on dev. We use addition and LSTMs as two shared baselines for all three tasks, i.e., for AS, PI and TE. We now describe these two shared baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 457,
"end": 464,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "(i) Addition. We sum up word embeddings element-wise to form each sentence representation. The classifier input is then the concatenation of the two sentence representations. (ii) A-LSTM. Before this work, most attention mechanisms in NLP were implemented in recurrent neural networks for text generation tasks such as machine translation (e.g., , ). Rockt\u00e4schel et al. (2016) present an attention-LSTM for natural language inference. Since this model is the pioneering attention based RNN system for sentence pair classification, we consider it as a baseline system (\"A-LSTM\") for all our three tasks. The A-LSTM has the same configuration as our ABCNNs in terms of word initialization (300-dimensional word2vec embeddings) and the dimensionality of all hidden layers (50).",
"cite_spans": [
{
"start": 351,
"end": 376,
"text": "Rockt\u00e4schel et al. (2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use WikiQA, 3 an open domain question-answer dataset. We use the subtask that assumes that there is at least one correct answer for a question. The corresponding dataset consists of 20,360 questioncandidate pairs in train, 1,130 pairs in dev and 2,352 pairs in test where we adopt the standard setup of only considering questions with correct answers in test. Following Yang et al. 2015, we truncate answers to 40 tokens.",
"cite_spans": [
{
"start": 7,
"end": 14,
"text": "WikiQA,",
"ref_id": null
},
{
"start": 15,
"end": 16,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Selection",
"sec_num": "5.1"
},
{
"text": "The task is to rank the candidate answers based on their relatedness to the question. Evaluation measures are mean average precision (MAP) and mean reciprocal rank (MRR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Selection",
"sec_num": "5.1"
},
{
"text": "Task-Specific Setup. We use cosine similarity as the similarity score for AS. In addition, we use sentence lengths, WordCnt (count of the number of nonstopwords in the question that also occur in the answer) and WgtWordCnt (reweight the counts by the IDF values of the question words). Thus, the final input to the LR layer has size k + 4: one cosine for each of the k blocks and the four additional features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Selection",
"sec_num": "5.1"
},
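The two lexical-overlap features can be sketched as follows. This is a minimal sketch: the stopword list and IDF values are toy stand-ins (the paper does not specify them), and `wordcnt_features` is a hypothetical helper name.

```python
def wordcnt_features(question, answer, stopwords, idf):
    """WordCnt: number of non-stopword question tokens that also occur in the
    answer; WgtWordCnt: the same count reweighted by each word's IDF value."""
    q_tokens = [w for w in question.lower().split() if w not in stopwords]
    a_tokens = set(answer.lower().split())
    matched = [w for w in q_tokens if w in a_tokens]
    wordcnt = len(matched)
    wgt_wordcnt = sum(idf.get(w, 0.0) for w in matched)
    return wordcnt, wgt_wordcnt

# Toy usage with made-up IDF values:
idf = {"capital": 2.5, "france": 1.5}
stop = {"what", "is", "the", "of"}
print(wordcnt_features("What is the capital of France",
                       "Paris is the capital of France", stop, idf))  # (2, 4.0)
```

IDF reweighting downweights matches on frequent, uninformative question words.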
{
"text": "We compare with seven baselines. The first three are considered by : (i) WordCnt; (ii) WgtWordCnt; (iii) CNN-Cnt (the state-of-theart system): combine CNN with (i) and (ii). Apart from the baselines considered by , we compare with two Addition baselines and two LSTM baselines. Addition and A-LSTM are the shared baselines described before. We also combine both with the four extra features; this gives us two additional baselines that we refer to as Addition(+) and A-LSTM(+).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Selection",
"sec_num": "5.1"
},
{
"text": "3 http://aka.ms/WikiQA Results. Table 3 shows performance of the baselines, of the BCNN and of the three ABCNNs. For CNNs, we test one (one-conv) and two (two-conv) convolution-pooling blocks.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 39,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Answer Selection",
"sec_num": "5.1"
},
{
"text": "The non-attention network BCNN already performs better than the baselines. If we add attention mechanisms, then the performance further improves by several points. Comparing the ABCNN-2 with the ABCNN-1, we find the ABCNN-2 is slightly better even though the ABCNN-2 is the simpler architecture. If we combine the ABCNN-1 and the ABCNN-2 to form the ABCNN-3, we get further improvement. 4 This can be explained by the ABCNN-3's ability to take attention of finer-grained granularity into consideration in each convolution-pooling block while the ABCNN-1 and the ABCNN-2 consider attention only at convolution input or only at pooling input, respectively. We also find that stacking two convolution-pooling blocks does not bring consistent improvement and therefore do not test deeper architectures.",
"cite_spans": [
{
"start": 387,
"end": 388,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Selection",
"sec_num": "5.1"
},
{
"text": "We use the Microsoft Research Paraphrase (MSRP) corpus (Dolan et al., 2004) . The training set contains 2753 true / 1323 false and the test set 1147 true / 578 false paraphrase pairs. We randomly select 400 pairs from train and use them as dev; but we still report results for training on the entire training set. For each triple (label, s 0 , s 1 ) in the training set, we also add (label, s 1 , s 0 ) to the training set to make best use of the training data. Systems are evaluated by accuracy and F 1 .",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(Dolan et al., 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Identification",
"sec_num": "5.2"
},
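The pair-swapping augmentation described above is straightforward to sketch (the function name and toy pairs are illustrative):

```python
def augment_with_swaps(pairs):
    """For each (label, s0, s1) also add (label, s1, s0), as done for MSRP."""
    return pairs + [(label, s1, s0) for (label, s0, s1) in pairs]

train = [(1, "a boy plays", "a kid is playing"),
         (0, "it rains", "the sun shines")]
augmented = augment_with_swaps(train)
assert len(augmented) == 2 * len(train)
```

Since paraphrase is a symmetric relation, the swapped pair carries the same label, so this doubles the training data at no annotation cost.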
{
"text": "Task-Specific Setup. In this task, we add the 15 MT features from (Madnani et al., 2012) and the lengths of the two sentences. In addition, we compute ROUGE-1, ROUGE-2 and ROUGE-SU4 (Lin, 2004) , which are scores measuring the match between the two sentences on (i) unigrams, (ii) bigrams and (iii) unigrams and skip-bigrams (maximum skip distance of four), respectively. In this task, we found transforming Euclidean distance into similarity score by 1/(1 + |x \u2212 y|) performs better than cosine similarity. Additionally, we use dynamic pooling (Yin and Sch\u00fctze, 2015a) of the attention matrix A in Equation 1and forward pooled values of all blocks to the classifier. This gives us better performance than only forwarding sentence-level matching features. We compare our system with representative DL approaches: (i) A-LSTM; (ii) A-LSTM(+): A-LSTM plus handcrafted features; (iii) RAE (Socher et al., 2011) , recursive autoencoder; (iv) Bi-CNN-MI (Yin and Sch\u00fctze, 2015a), a bi-CNN architecture; and (v) MPSSM-CNN (He et al., 2015) , the state-of-the-art NN system for PI, and the following four non-DL systems: (vi) Addition; (vii) Addition(+): Addition plus handcrafted features; (viii) MT (Madnani et al., 2012) , a system that combines machine translation metrics; 5 (ix) MF-TF-KLD (Ji and Eisenstein, 2013) , the state-of-the-art non-NN system.",
"cite_spans": [
{
"start": 66,
"end": 88,
"text": "(Madnani et al., 2012)",
"ref_id": "BIBREF32"
},
{
"start": 182,
"end": 193,
"text": "(Lin, 2004)",
"ref_id": "BIBREF30"
},
{
"start": 545,
"end": 569,
"text": "(Yin and Sch\u00fctze, 2015a)",
"ref_id": "BIBREF53"
},
{
"start": 885,
"end": 906,
"text": "(Socher et al., 2011)",
"ref_id": "BIBREF43"
},
{
"start": 1014,
"end": 1031,
"text": "(He et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 1192,
"end": 1214,
"text": "(Madnani et al., 2012)",
"ref_id": "BIBREF32"
},
{
"start": 1286,
"end": 1311,
"text": "(Ji and Eisenstein, 2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Identification",
"sec_num": "5.2"
},
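The distance-to-similarity transform used here is 1/(1 + |x − y|), with |·| the Euclidean norm of the difference of the two sentence representations. A minimal sketch with toy vectors (function names are ours):

```python
import numpy as np

def euclidean_similarity(x, y):
    """Map Euclidean distance to a (0, 1] similarity: 1 / (1 + |x - y|)."""
    return 1.0 / (1.0 + np.linalg.norm(x - y))

def cosine_similarity(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(euclidean_similarity(x, y))   # 1 / (1 + sqrt(2)), about 0.414
print(cosine_similarity(x, y))      # 0.0
```

Unlike cosine similarity, the Euclidean transform is sensitive to vector magnitudes and reaches exactly 1 only for identical representations.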
{
"text": "Results. Table 4 shows that the BCNN is slightly worse than the state-of-the-art whereas the ABCNN-1 roughly matches it. The ABCNN-2 is slightly above the state-of-the-art. The ABCNN-3 outperforms the state-of-the-art in accuracy and F 1 . 6 Two convolution layers only bring small improvements over one. ",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Paraphrase Identification",
"sec_num": "5.2"
},
{
"text": "SemEval 2014 Task 1 (Marelli et al., 2014a ) evaluates system predictions of textual entailment (TE) relations on sentence pairs from the SICK dataset (Marelli et al., 2014b) . The three classes are entailment, contradiction and neutral. The sizes of SICK train, dev and test sets are 4439, 495 and 4906 pairs, respectively. We call this dataset ORIG. We also create NONOVER, a copy of ORIG in which words occurring in both sentences are removed. A sentence in NONOVER is denoted by the special token <empty> if all words are removed. Table 5 shows three pairs from ORIG and their transformation in NONOVER. We observe that focusing on the non-overlapping parts provides clearer hints for TE than ORIG. In this task, we run two copies of each network, one for ORIG, one for NONOVER; these two networks have a single common LR layer.",
"cite_spans": [
{
"start": 20,
"end": 42,
"text": "(Marelli et al., 2014a",
"ref_id": "BIBREF33"
},
{
"start": 151,
"end": 174,
"text": "(Marelli et al., 2014b)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 535,
"end": 542,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
{
"text": "Like Lai and Hockenmaier (2014) , we train our final system (after fixing hyperparameters) on train and dev (4934 pairs). Eval measure is accuracy.",
"cite_spans": [
{
"start": 5,
"end": 31,
"text": "Lai and Hockenmaier (2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
{
"text": "Task-Specific Setup. We found that for this task forwarding two similarity scores from each block (instead of just one) is helpful. We use cosine similarity and Euclidean distance. As we did for paraphrase identification, we add the 15 MT features for each sentence pair for this task as well; our motivation is that entailed sentences resemble paraphrases more than contradictory sentences do. We use the following linguistic features. Negation is important for detecting contradiction. Feature NEG is set to 1 if either sentence contains \"no\", \"not\", \"nobody\", \"isn't\" and to 0 otherwise. Following Lai and Hockenmaier (2014) , we use Word-Net (Miller, 1995) to detect nyms: synonyms, hypernyms and antonyms in the pairs. But we do this on NONOVER (not on ORIG) to focus on what is critical for TE. Specifically, feature SYN is the number of word pairs in s 0 and s 1 that are synonyms. HYP0 (resp. HYP1) is the number of words in s 0 (resp. s 1 ) that have a hypernym in s 1 (resp. s 0 ). In addition, we collect all potential antonym pairs (PAP) in NONOVER. We identify the matched chunks that occur in contradictory and neutral, but not in entailed pairs. We exclude synonyms and hypernyms and apply a frequency filter of n = 2. In contrast to Lai and Hockenmaier (2014), we constrain the PAP pairs to cosine similarity above 0.4 in word2vec embedding space as this discards many noise pairs. Feature ANT is the number of matched PAP antonyms in a sentence pair. As before we use sentence lengths, both for ORIG (LEN0O: length s 0 , LEN1O: length s 1 ) and for NONOVER (LEN0N: length s 0 , LEN1N: length s 1 ).",
"cite_spans": [
{
"start": 601,
"end": 627,
"text": "Lai and Hockenmaier (2014)",
"ref_id": "BIBREF27"
},
{
"start": 637,
"end": 660,
"text": "Word-Net (Miller, 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
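As a sketch, the NEG feature and a SYN-style count can be computed as below. The negation list follows the text; the synonym lookup is a toy dict standing in for the WordNet queries the paper uses, and both function names are illustrative.

```python
NEGATION_WORDS = {"no", "not", "nobody", "isn't"}

def neg_feature(s0, s1):
    """NEG = 1 if either sentence contains a negation word, else 0."""
    tokens = set(s0.lower().split()) | set(s1.lower().split())
    return int(bool(tokens & NEGATION_WORDS))

# Toy SYN feature: count cross-sentence word pairs that are synonyms according
# to a lookup table (a stand-in for WordNet synonym queries).
TOY_SYNONYMS = {("kid", "child"), ("child", "kid")}

def syn_feature(s0, s1):
    return sum((w0, w1) in TOY_SYNONYMS
               for w0 in s0.lower().split() for w1 in s1.lower().split())

print(neg_feature("a man is not sleeping", "a man is sleeping"))  # 1
print(syn_feature("a kid runs", "a child is running"))            # 1
```

In the paper these lookups run on NONOVER, i.e., after words occurring in both sentences have been removed.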
{
"text": "On the whole, we have 24 extra features: 15 MT metrics, NEG, SYN, HYP0, HYP1, ANT, LEN0O, LEN1O, LEN0N and LEN1N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
{
"text": "Apart from the Addition and LSTM baselines, we further compare with the top-3 systems in SemEval and TrRNTN (Bowman et al., 2015b) , a recursive neural network developed for this SICK task.",
"cite_spans": [
{
"start": 101,
"end": 130,
"text": "TrRNTN (Bowman et al., 2015b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
{
"text": "Results. Table 6 shows that our CNNs outperform A-LSTM (with or without linguistic features added) and the top three SemEval systems. Comparing ABCNNs with the BCNN, attention mechanisms consistently improve performance. The ABCNN-1 has performance comparable to the ABCNN-2 while method acc Sem-Eval Top3 (Jimenez et al., 2014) 83.1 (Zhao et al., 2014) 83.6 (Lai and Hockenmaier, 2014) 84.6 TrRNTN (Bowman et al., 2015b) 76.9 the ABCNN-3 is better still: a boost of 1.6 points compared to the previous state of the art. 7 Visual Analysis. Figure 4 visualizes the attention matrices for one TE sentence pair in the ABCNN-2 for blocks b 1 (unigrams), b 2 (first convolutional layer) and b 3 (second convolutional layer). Darker shades of blue indicate stronger attention values.",
"cite_spans": [
{
"start": 306,
"end": 328,
"text": "(Jimenez et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 334,
"end": 353,
"text": "(Zhao et al., 2014)",
"ref_id": "BIBREF56"
},
{
"start": 359,
"end": 367,
"text": "(Lai and",
"ref_id": "BIBREF27"
},
{
"start": 368,
"end": 421,
"text": "Hockenmaier, 2014) 84.6 TrRNTN (Bowman et al., 2015b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 6",
"ref_id": "TABREF11"
},
{
"start": 540,
"end": 548,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
{
"text": "In Figure 4 (top), each word corresponds to exactly one row or column. We can see that words in s i with semantic equivalents in s 1\u2212i get high attention while words without semantic equivalents get low attention, e.g., \"walking\" and \"murals\" in s 0 and \"front\" and \"colorful\" in s 1 . This behavior seems reasonable for the unigram level.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
{
"text": "Rows/columns of the attention matrix in Figure 4 (middle) correspond to phrases of length three since filter width w = 3. High attention values generally correlate with close semantic correspondence: the phrase \"people are\" in s 0 matches \"several people are\" in s 1 ; both \"are walking outside\" and \"walking outside the\" in s 0 match \"are in front\" in s 1 ; \"the building that\" in s 0 matches \"a colorful building\" in s 1 . More interestingly, looking at the bottom right corner, both \"on it\" and \"it\" in s 0 match \"building\" in s 1 ; this indicates that ABCNNs are able to detect some coreference across sentences. \"building\" in s 1 has two places in which higher attentions appear, one is with \"it\" in s 0 , the other is with \"the building that\" in s 0 . This may indicate that ABCNNs recognize that \"building\" in s 1 and \"the building that\" / \"it\" in s 0 refer to the same object. Hence, coreference resolution across sentences as well as within a sentence both are detected. For the attention vectors on the left and the top, we can see that attention has focused on the key parts: \"people are walking outside the building that\" in s 0 , \"several people are in\" and \"of a colorful building\" in s 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
{
"text": "Rows/columns of the attention matrix in Figure 4 (bottom, second layer of convolution) correspond to phrases of length 5 since filter width w = 3 in both convolution layers (5 = 1 + 2 * (3 \u2212 1)). We use \". . .\" to denote words in the middle if a phrase like \"several...front\" has more than two words. We can see that attention distribution in the matrix has focused on some local regions. As granularity of phrases is larger, it makes sense that the attention values are smoother. But we still can find some interesting clues: at the two ends of the main diagonal, higher attentions hint that the first part of s 0 matches well with the first part of s 1 ; \"several murals on it\" in s 0 matches well with \"of a colorful building\" in s 1 , which satisfies the intuition that these two phrases are crucial for making a decision on TE in this case. This again shows the potential strength of our system in figuring out which parts of the two sentences refer to the same object. In addition, in the central part of the matrix, we can see that the long phrase \"people are walking outside the building\" in s 0 matches well with the long phrase \"are in front of a colorful building\" in s 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "5.3"
},
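The phrase granularity grows with each convolution layer as r = 1 + k(w − 1) for k stacked layers of filter width w; a one-line check (function name is ours):

```python
def receptive_field(num_conv_layers, filter_width):
    """Phrase granularity after stacking same-width convolutions:
    r = 1 + k * (w - 1)."""
    return 1 + num_conv_layers * (filter_width - 1)

# With w = 3 as in the experiments: one layer covers trigrams, two layers
# cover 5-grams (5 = 1 + 2 * (3 - 1)).
print(receptive_field(1, 3))  # 3
print(receptive_field(2, 3))  # 5
```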
{
"text": "We presented three mechanisms to integrate attention into CNNs for general sentence pair modeling tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "Our experiments on AS, PI and TE show that attention-based CNNs perform better than CNNs without attention mechanisms. The ABCNN-2 generally outperforms the ABCNN-1 and the ABCNN-3 surpasses both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "In all tasks, we did not find any big improvement of two layers of convolution over one layer. This is probably due to the limited size of training data. We expect that, as larger training sets become available, deep ABCNNs will show even better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "In addition, linguistic features contribute in all three tasks: improvements by 0.0321 (MAP) and 0.0338 (MRR) for AS, improvements by 3.8 (acc) and 2.1 (F 1 ) for PI and an improvement by 1.6 (acc) for TE. But our ABCNNs can still reach or surpass state-of-the-art even without those features in AS and TE tasks. This indicates that ABCNNs are generally strong NN systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "Attention-based LSTMs are especially successful in tasks with a strong generation component like machine translation (discussed in Sec. 2). CNNs have not been used for this type of task. This is an interesting area of future work for attention-based CNNs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6"
},
{
"text": "The weights of the two matrices are shared in our implementation to reduce the number of parameters of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://scikit-learn.org/stable/ for both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If we limit the input to the LR layer to the k similarity scores in the ABCNN-3 (two-conv), results are .660 (MAP) / .677 (MRR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For better comparability of approaches in our experiments, we use a simple SVM classifier, which performs slightly worse thanMadnani et al. (2012)'s more complex meta-classifier.6 Improvement of .3 (acc) and .1 (F1) over state-of-the-art is not significant. The ABCNN-3 (two-conv) without \"linguistic\" features (i.e., MT and ROUGE) achieves 75.1/82.7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If we run the ABCNN-3 (two-conv) without the 24 linguistic features, performance is 84.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge the support of Deutsche Forschungsgemeinschaft (DFG): grant SCHU 2246/8-2.We would like to thank the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multiple object recognition with visual attention",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. 2015. Multiple object recognition with visual atten- tion. In Proceedings of ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Structured retrieval for question answering",
"authors": [
{
"first": "Matthew",
"middle": [
"W"
],
"last": "Bilotti",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Ogilvie",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "351--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew W. Bilotti, Paul Ogilvie, Jamie Callan, and Eric Nyberg. 2007. Structured retrieval for question an- swering. In Proceedings of SIGIR, pages 351-358.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A comparison of vector-based representations for semantic composition",
"authors": [
{
"first": "William",
"middle": [],
"last": "Blacoe",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "546--556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composi- tion. In Proceedings of EMNLP-CoNLL, pages 546- 556.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015a. A large anno- tated corpus for learning natural language inference. In Proceedings of EMNLP, pages 632-642.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Recursive neural networks can learn logical semantics",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of CVSC Workshop",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Christopher Potts, and Christo- pher D. Manning. 2015b. Recursive neural networks can learn logical semantics. In Proceedings of CVSC Workshop, pages 12-21.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Signature verification using A \"siamese",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Bromley",
"suffix": ""
},
{
"first": "James",
"middle": [
"W"
],
"last": "Bentz",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Guyon",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "S\u00e4ckinger",
"suffix": ""
},
{
"first": "Roopak",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 1993,
"venue": "time delay neural network. IJPRAI",
"volume": "7",
"issue": "4",
"pages": "669--688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Bromley, James W. Bentz, L\u00e9on Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard S\u00e4ckinger, and Roopak Shah. 1993. Signature verification us- ing A \"siamese\" time delay neural network. IJPRAI, 7(4):669-688.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks",
"authors": [
{
"first": "Chunshui",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Xianming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yinan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zilei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yongzhen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"S"
],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICCV",
"volume": "",
"issue": "",
"pages": "2956--2964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunshui Cao, Xianming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, Liang Wang, Chang Huang, Wei Xu, Deva Ramanan, and Thomas S. Huang. 2015. Look and think twice: Cap- turing top-down visual attention with feedback con- volutional neural networks. In Proceedings of ICCV, pages 2956-2964.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative learning over constrained latent representations",
"authors": [
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Srikumar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "429--437",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming-Wei Chang, Dan Goldwasser, Dan Roth, and Vivek Srikumar. 2010. Discriminative learning over con- strained latent representations. In Proceedings of NAACL-HLT, pages 429-437.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ABC-CNN: An attention based convolutional neural network for visual question answering",
"authors": [
{
"first": "Kan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Liang-Chieh",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Haoyuan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ram",
"middle": [],
"last": "Nevatia",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. 2015. ABC-CNN: An attention based convolutional neural network for visual question answering. CoRR, abs/1511.05960.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "End-to-end continuous speech recognition using attention-based recurrent NN: First results",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Deep Learning and Representation Learning Workshop, NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. End-to-end continuous speech recognition using attention-based recurrent NN: First results. In Proceedings of Deep Learning and Repre- sentation Learning Workshop, NIPS.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Attention-based models for speech recognition",
"authors": [],
"year": null,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "577--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attention-based models for speech recognition. In Proceedings of NIPS, pages 577-585.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "350--356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Pro- ceedings of COLING, pages 350-356.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "JMLR",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121-2159.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Applying deep learning to answer selection: A study and an open task",
"authors": [
{
"first": "Minwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Glass",
"suffix": ""
},
{
"first": "Lidan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of IEEE ASRU Workshop",
"volume": "",
"issue": "",
"pages": "813--820",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. 2015. Applying deep learn- ing to answer selection: A study and an open task. In Proceedings of IEEE ASRU Workshop, pages 813- 820.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "DRAW: A recurrent neural network for image generation",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Jimenez Rezende",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "1462--1471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Jimenez Rezende, and Daan Wierstra. 2015. DRAW: A recurrent neural network for image generation. In Proceedings of ICML, pages 1462-1471.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multiperspective sentence similarity modeling with convolutional neural networks",
"authors": [
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1576--1586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hua He, Kevin Gimpel, and Jimmy Lin. 2015. Multi- perspective sentence similarity modeling with convo- lutional neural networks. In Proceedings of EMNLP, pages 1576-1586.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Tree edit models for recognizing textual entailments, paraphrases, and answers to questions",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1011--1019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman and Noah A. Smith. 2010. Tree edit models for recognizing textual entailments, para- phrases, and answers to questions. In Proceedings of NAACL-HLT, pages 1011-1019.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning transferrable knowledge for semantic segmentation with deep convolutional neural network",
"authors": [
{
"first": "Seunghoon",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Junhyuk",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Bohyung",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seunghoon Hong, Junhyuk Oh, Honglak Lee, and Bo- hyung Han. 2016. Learning transferrable knowl- edge for semantic segmentation with deep convolu- tional neural network. In Proceedings of CVPR.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Convolutional neural network architectures for matching natural language sentences",
"authors": [
{
"first": "Baotian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "2042--2050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Proceedings of NIPS, pages 2042-2050.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Discriminative improvements to distributional sentence similarity",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "891--896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji and Jacob Eisenstein. 2013. Discriminative improvements to distributional sentence similarity. In Proceedings of EMNLP, pages 891-896.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Due\u00f1as",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Baquero",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Se-mEval",
"volume": "",
"issue": "",
"pages": "732--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Jimenez, George Due\u00f1as, Julia Baquero, and Alexander Gelbukh. 2014. UNAL-NLP: Combining soft cardinality features for semantic textual similar- ity, relatedness and entailment. In Proceedings of Se- mEval, pages 732-742.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A convolutional neural network for modelling sentences",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "655--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for mod- elling sentences. In Proceedings of ACL, pages 655- 665.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. In Proceedings of EMNLP, pages 1746-1751.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Illinois-LH: A denotational and distributional approach to semantics",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "329--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Lai and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to semantics. In Proceedings of SemEval, pages 329-334.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Gradient-based learning applied to document recognition",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Haffner",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the IEEE",
"volume": "",
"issue": "",
"pages": "2278--2324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278-2324.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A hierarchical neural autoencoder for paragraphs and documents",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1106--1115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of ACL, pages 1106-1115.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL Text Summarization Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of the ACL Text Summarization Workshop.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP, pages 1412-1421.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Re-examining machine translation metrics for paraphrase identification",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "182--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of NAACL- HLT, pages 182-190.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raf- faella Bernardi, Stefano Menini, and Roberto Zampar- elli. 2014a. Semeval-2014 task 1: Evaluation of com- positional distributional semantic models on full sen- tences through semantic relatedness and textual entail- ment. In Proceedings of SemEval, pages 1-8.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A SICK cure for the evaluation of compositional distributional semantic models",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zampar- elli. 2014b. A SICK cure for the evaluation of com- positional distributional semantic models. In Proceed- ings of LREC, pages 216-223.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed rep- resentations of words and phrases and their composi- tionality. In Proceedings of NIPS, pages 3111-3119.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "WordNet: A lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Commun. ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A lexical database for english. Commun. ACM, 38(11):39-41.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Recurrent models of visual attention",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Heess",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "2204--2212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Mnih, Nicolas Heess, Alex Graves, and Ko- ray Kavukcuoglu. 2014. Recurrent models of visual attention. In Proceedings of NIPS, pages 2204-2212.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Cogex: A semantically and contextually enriched logic prover for question answering",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hodges",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Applied Logic",
"volume": "5",
"issue": "1",
"pages": "49--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Moldovan, Christine Clark, Sanda Harabagiu, and Daniel Hodges. 2007. Cogex: A semantically and contextually enriched logic prover for question an- swering. Journal of Applied Logic, 5(1):49-69.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Mapping dependencies trees: An application to question answering",
"authors": [
{
"first": "Vasin",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of AI&Math 2004 (Special session: Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2004. Mapping dependencies trees: An application to ques- tion answering. In Proceedings of AI&Math 2004 (Special session: Intelligent Text Processing).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, and Phil Blunsom",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Her- mann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, and Phil Blunsom. 2016. Rea- soning about entailment with neural attention. In Pro- ceedings of ICLR.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of EMNLP, pages 379-389.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Using semantic roles to improve question answering",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of EMNLP-CoNLL, pages 12-21.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "801--809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Eric H. Huang, Jeffrey Pennington, An- drew Y. Ng, and Christopher D. Manning. 2011. Dy- namic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of NIPS, pages 801-809.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "LSTMbased deep learning models for non-factoid answer selection",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICLR Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Tan, Bing Xiang, and Bowen Zhou. 2016. LSTM- based deep learning models for non-factoid answer se- lection. In Proceedings of ICLR Workshop.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "A deep architecture for semantic matching with multiple positional sentence representations",
"authors": [
{
"first": "Shengxian",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "2835--2841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, and Xueqi Cheng. 2016. A deep architecture for semantic matching with multiple positional sentence representations. In Proceedings of AAAI, pages 2835- 2841.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "What is the jeopardy model? A quasisynchronous grammar for QA",
"authors": [
{
"first": "Mengqiu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "22--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. What is the jeopardy model? A quasi- synchronous grammar for QA. In Proceedings of EMNLP-CoNLL, pages 22-32.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "The application of two-level attention models in deep convolutional neural network for fine-grained image classification",
"authors": [
{
"first": "Tianjun",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yichong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kuiyuan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jiaxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuxin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of CVPR",
"volume": "",
"issue": "",
"pages": "842--850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianjun Xiao, Yichong Xu, Kuiyuan Yang, Jiaxing Zhang, Yuxin Peng, and Zheng Zhang. 2015. The application of two-level attention models in deep con- volutional neural network for fine-grained image clas- sification. In Proceedings of CVPR, pages 842-850.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. In Proceedings of ICML, pages 2048-2057.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "WikiQA: A challenge dataset for open-domain question answering",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "2013--2018",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain ques- tion answering. In Proceedings of EMNLP, pages 2013-2018.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Semi-markov phrasebased monolingual alignment",
"authors": [
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "590--600",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuchen Yao, Benjamin Van Durme, Chris Callison- Burch, and Peter Clark. 2013a. Semi-markov phrase- based monolingual alignment. In Proceedings of EMNLP, pages 590-600.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Automatic coupling of answer extraction and information retrieval",
"authors": [
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "159--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuchen Yao, Benjamin Van Durme, and Peter Clark. 2013b. Automatic coupling of answer extraction and information retrieval. In Proceedings of ACL, pages 159-165.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Question answering using enhanced lexical semantic models",
"authors": [
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "Andrzej",
"middle": [],
"last": "Pastusiak",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1744--1753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of ACL, pages 1744-1753.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Convolutional neural network for paraphrase identification",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "901--911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2015a. Convolu- tional neural network for paraphrase identification. In Proceedings of NAACL-HLT, pages 901-911.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Multi-GranCNN: An architecture for general matching of text chunks on multiple levels of granularity",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "63--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2015b. Multi- GranCNN: An architecture for general matching of text chunks on multiple levels of granularity. In Pro- ceedings of ACL-IJCNLP, pages 63-73.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Deep learning for answer sentence selection",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Deep Learning and Representation Learning Workshop, NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sen- tence selection. In Proceedings of Deep Learning and Representation Learning Workshop, NIPS.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "ECNU: One stone two birds: Ensemble of heterogenous measures for semantic relatedness and textual entailment",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tiantian",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "271--277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Zhao, Tiantian Zhu, and Man Lan. 2014. ECNU: One stone two birds: Ensemble of heterogenous mea- sures for semantic relatedness and textual entailment. In Proceedings of SemEval, pages 271-277.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Positive (< s 0 , s + 1 >) and negative (< s 0 , s \u2212 1 >) examples for AS, PI and TE tasks. RH = Random House",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Transactions of the Association for Computational Linguistics, vol. 4, pp. 259-272, 2016. Action Editor: Brian Roark. Submission batch: 12/2015; Revision batch: 3/2016; Published 6/2016. c 2016 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "BCNN: ABCNN without Attention Each word is represented as a d 0 -dimensional precomputed word2vec (Mikolov et al., 2013) embedding, d 0 = 300. As a result, each sentence is represented as a feature map of dimension d 0 \u00d7 s.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "shows two unit representation feature maps in red: this part of the ABCNN-1 is the same as in the BCNN (seeFigure 2). Each column is the(a) One block in ABCNN-1 (b) One block in ABCNN-2 (c) One block in ABCNN-3 Figure 3: Three ABCNN architectures",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "Attention visualization for TE. Top: unigrams, b 1 . Middle: conv1, b 2 . Bottom: conv2, b 3 .",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Notation</td></tr><tr><td>Mnih et al. (2014) apply attention in recurrent neural networks (RNNs) to extract information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Gregor et al. (2015) combine a spatial attention mechanism with RNNs for image generation. Ba et al. (2015) investigate attention-based RNNs for recognizing multiple objects in images. Chorowski et al. (2014) and Chorowski et al. (2015) use attention in RNNs for speech recognition.</td></tr></table>"
},
"TABREF2": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>AS</td><td>PI</td><td>TE</td></tr><tr><td>#CL</td><td>lr w L2</td><td>lr w L2</td><td>lr w L2</td></tr><tr><td>ABCNN-1 1</td><td>.08 4 .0004</td><td>.08 3 .0002</td><td>.08 3 .0006</td></tr><tr><td>ABCNN-1 2</td><td>.085 4 .0006</td><td>.085 3 .0003</td><td>.085 3 .0006</td></tr><tr><td>ABCNN-2 1</td><td>.05 4 .0003</td><td>.085 3 .0001</td><td>.09 3 .00065</td></tr><tr><td>ABCNN-2 2</td><td>.06 4 .0006</td><td>.085 3 .0001</td><td>.085 3 .0007</td></tr><tr><td>ABCNN-3 1</td><td>.05 4 .0003</td><td>.05 3 .0003</td><td>.09 3 .0007</td></tr><tr><td>ABCNN-3 2</td><td>.06 4 .0006</td><td>.055 3 .0005</td><td>.09 3 .0007</td></tr></table>"
},
"TABREF3": {
"text": "Hyperparameters. lr: learning rate. #CL: number of convolution layers. w: filter width. The number of convolution kernels d i (i > 0) is 50 throughout.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF5": {
"text": "Results on WikiQA. Best result per column is bold. Significant improvements over state-of-the-art baselines (underlined) are marked with * (t-test, p < .05).",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF7": {
"text": "Results for PI on MSRP",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF9": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>: SICK data: Converting the original sentences (ORIG) into the NONOVER format</td></tr></table>"
},
"TABREF11": {
"text": "Results on SICK. Significant improvements over (Lai and Hockenmaier, 2014) are marked with * (test of equal proportions, p < .05).",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}