ACL-OCL / Base_JSON /prefixK /json /K17 /K17-1027.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "K17-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:16.295921Z"
},
"title": "Learning Contextual Embeddings for Structural Semantic Similarity using Categorical Information",
"authors": [
{
"first": "Massimo",
"middle": [],
"last": "Nicosia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Trento",
"location": {
"postCode": "38123",
"settlement": "Povo (TN)",
"country": "Italy Qatar"
}
},
"email": "m.nicosia@gmail.com"
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": "",
"affiliation": {},
"email": "amoschitti@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Tree kernels (TKs) and neural networks are two effective approaches for automatic feature engineering. In this paper, we combine them by modeling context word similarity in semantic TKs. This way, the latter can operate subtree matching by applying neural-based similarity on tree lexical nodes. We study how to learn representations for the words in context such that TKs can exploit more focused information. We found that neural embeddings produced by current methods do not provide a suitable contextual similarity. Thus, we define a new approach based on a Siamese Network, which produces word representations while learning a binary text similarity. We set the latter considering examples in the same category as similar. The experiments on question and sentiment classification show that our semantic TK highly improves previous results.",
"pdf_parse": {
"paper_id": "K17-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "Tree kernels (TKs) and neural networks are two effective approaches for automatic feature engineering. In this paper, we combine them by modeling context word similarity in semantic TKs. This way, the latter can operate subtree matching by applying neural-based similarity on tree lexical nodes. We study how to learn representations for the words in context such that TKs can exploit more focused information. We found that neural embeddings produced by current methods do not provide a suitable contextual similarity. Thus, we define a new approach based on a Siamese Network, which produces word representations while learning a binary text similarity. We set the latter considering examples in the same category as similar. The experiments on question and sentiment classification show that our semantic TK highly improves previous results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Structural Kernels (Moschitti, 2006) can automatically represent syntactic and semantic structures in terms of substructures, showing high accuracy in several tasks, e.g., relation extraction (Nguyen et al., 2009; Nguyen and Moschitti, 2011; Plank and Moschitti, 2013; Nguyen et al., 2015) and sentiment analysis (Nguyen and Shirai, 2015) .",
"cite_spans": [
{
"start": 19,
"end": 36,
"text": "(Moschitti, 2006)",
"ref_id": "BIBREF23"
},
{
"start": 192,
"end": 213,
"text": "(Nguyen et al., 2009;",
"ref_id": "BIBREF31"
},
{
"start": 214,
"end": 241,
"text": "Nguyen and Moschitti, 2011;",
"ref_id": "BIBREF30"
},
{
"start": 242,
"end": 268,
"text": "Plank and Moschitti, 2013;",
"ref_id": "BIBREF33"
},
{
"start": 269,
"end": 289,
"text": "Nguyen et al., 2015)",
"ref_id": "BIBREF29"
},
{
"start": 313,
"end": 338,
"text": "(Nguyen and Shirai, 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At the same time, deep learning has demonstrated its effectiveness on a plethora of NLP tasks such as Question Answering (QA) (Severyn and Moschitti, 2015a; Rao et al., 2016) , and parsing (Andor et al., 2016) , to name a few. Deep learning models (DLMs) usually do not include traditional features; they extract relevant signals from distributed representations of words, by applying a sequence of linear and non linear functions to the input. Word representations are learned from large corpora, or directly from the training data of the task at hand.",
"cite_spans": [
{
"start": 126,
"end": 156,
"text": "(Severyn and Moschitti, 2015a;",
"ref_id": "BIBREF36"
},
{
"start": 157,
"end": 174,
"text": "Rao et al., 2016)",
"ref_id": "BIBREF34"
},
{
"start": 189,
"end": 209,
"text": "(Andor et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Clearly, joining the two approaches above would have the advantage of easily integrating structures with kernels, and lexical representations with embeddings into learning algorithms. In this respect, the Smoothed Partial Tree Kernel (SPTK) is a noticeable approach for using lexical similarity in tree structures (Croce et al., 2011) . SPTK can match different tree fragments, provided that they only differ in lexical nodes. Although the results were excellent, the used similarity did not consider the fact that words in context assume different meanings or weights for the final task, i.e., it does not consider the context. In contrast, SPTK would benefit to use specific word similarity when matching subtrees corresponding to different constituency. For example, the two questions: -What famous model was married to Billy Joel? -What famous model of the Universe was proposed?",
"cite_spans": [
{
"start": 314,
"end": 334,
"text": "(Croce et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "are similar in terms of structures and words but clearly have different meaning and also different categories: the first asks for a human (the answer is Christie Brinkley) whereas the latter asks for an entity (an answer could be the Expanding Universe). To determine that such questions are not similar, SPTK would need different embeddings for the word model in the two contexts, i.e., those related to person and science, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use distributed representations generated by neural approaches for computing the lexical similarity in TKs. We carry out an extensive comparison between different methods, i.e., word2vec, using CBOW and SkipGram, and Glove, in terms of their impact on convolution semantic TKs for question classification (QC). We experimented with composing word vectors and alternative embedding methods for bigger unit of text to obtain context specific vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, the study above showed that standard ways to model context are not effective. Thus, we propose a novel application of Siamese Networks to learn word vectors in context, i.e., a representation of a word conditioned on the other words in the sentence. Since a comprehensive and large enough corpus of disambiguated senses is not available, we approximate them with categorical information: we derive a classification task that consists in deciding if two words extracted from two sentences belong to the same sentence category. We use the obtained contextual word representations in TKs. Our new approach tested on two tasks, question and sentiment classification, shows that modeling the context further improves the semantic kernel accuracy compared to only using standard word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Distributed word representations are an effective and compact way to represent text and are widely used in neural network models for NLP. The research community has also studied them in the context of many other machine learning models, where they are typically used as features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "SPTK is an interesting kernel algorithm that can compute word to word similarity with embeddings (Croce et al., 2011; Filice et al., 2015 Filice et al., , 2016 . In our work, we go beyond simple word similarity and improve the modeling power of SPTK using contextual information in word representations. Our approach mixes the syntactic and semantic features automatically extracted by the TK, with representations learned with deep learning models (DLMs).",
"cite_spans": [
{
"start": 97,
"end": 117,
"text": "(Croce et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 118,
"end": 137,
"text": "Filice et al., 2015",
"ref_id": "BIBREF10"
},
{
"start": 138,
"end": 159,
"text": "Filice et al., , 2016",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Early attempts to incorporate syntactic information in DLMs use grammatical relations to guide the composition of word embeddings, and recursively compose the resulting substructural embeddings with parametrized functions. In Socher et al. (2012) and Socher et al. (2013) , a parse tree is used to guide the composition of word embeddings, focusing on a single parametrized function for composing all words according to different grammatical relations. In Tai et al. (2015) , several LSTM architectures that follow an order determined by syntax are presented. Considering embeddings only, Levy and Goldberg (2014) proposed to learn word representations that incorporate syntax from dependency-based contexts. In contrast, we inject syntactic information by means of TKs, which establish a hard match between tree fragments, while the soft match is enabled by the similarities of distributed representations.",
"cite_spans": [
{
"start": 226,
"end": 246,
"text": "Socher et al. (2012)",
"ref_id": "BIBREF40"
},
{
"start": 251,
"end": 271,
"text": "Socher et al. (2013)",
"ref_id": "BIBREF41"
},
{
"start": 456,
"end": 473,
"text": "Tai et al. (2015)",
"ref_id": "BIBREF42"
},
{
"start": 589,
"end": 613,
"text": "Levy and Goldberg (2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "DLMs have been applied to the QC task. Convolutional neural neworks are explored in Kalchbrenner et al. (2014) and Kim (2014) . In Ma et al. (2015) , convolutions are guided by dependencies linking question words, but it is not clear how the word vectors are initialized. In our case, we only use pre-trained word vectors and the output of a parser, avoiding intensive manual feature engineering, as in Silva et al. (2010) . The accuracy of these models are reported in Tab. 1 and can be compared to our QC results (Table 4 ) on the commonly used test set. In addition, we report our results in a cross-validation setting to better assess the generalization capabilities of the models.",
"cite_spans": [
{
"start": 84,
"end": 110,
"text": "Kalchbrenner et al. (2014)",
"ref_id": "BIBREF14"
},
{
"start": 115,
"end": 125,
"text": "Kim (2014)",
"ref_id": "BIBREF15"
},
{
"start": 131,
"end": 147,
"text": "Ma et al. (2015)",
"ref_id": "BIBREF20"
},
{
"start": 403,
"end": 422,
"text": "Silva et al. (2010)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 515,
"end": 523,
"text": "(Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To encode words in context, we employ a Siamese Network, a DLM that has been widely used to model sentence similarity. In a Siamese setting, the same network is used to encode two sentences, and during learning, the distance between the representations of similar sentences is minimized. In Mueller and Thyagarajan (2016) , an LSTM is used to encode similar sentences, and their Manhattan distance is minimized. In Neculoiu et al. (2016), a character level bidirectional LSTM is used to determine the similarity between job titles. In Tan et al. (2016) , the problem of question/answer matching is treated as a similarity task, and convolutions and pooling on top of LSTM states are used to extract the sentence representations. The paper reports also experiments that include neural attention. Those mechanisms are excluded in our work, since we do not want to break the symmetry of the encoding model.",
"cite_spans": [
{
"start": 291,
"end": 321,
"text": "Mueller and Thyagarajan (2016)",
"ref_id": "BIBREF24"
},
{
"start": 535,
"end": 552,
"text": "Tan et al. (2016)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In Siamese Networks, the similarity is typically computed between pair of sentences. In our work, we compute the similarity of word representations extracted from the states of a recurrent network. Such representations still depend on the entire sentence, and thus encode contextual information. Table 1 : QC accuracy (%) and description of SVM (Silva et al., 2010) , DCNN (Kalchbrenner et al., 2014) , CNNns (Kim, 2014) , DepCNN, (Ma et al., 2015) and SPTK (Croce et al., 2011) models.",
"cite_spans": [
{
"start": 345,
"end": 365,
"text": "(Silva et al., 2010)",
"ref_id": "BIBREF39"
},
{
"start": 373,
"end": 400,
"text": "(Kalchbrenner et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 409,
"end": 420,
"text": "(Kim, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 431,
"end": 448,
"text": "(Ma et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 458,
"end": 478,
"text": "(Croce et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "tively encode lexical, syntactic and semantic information in learning algorithms. For this purpose, they count the number of substructures shared by two trees. In most TKs, two tree fragments match if they are identical. In contrast, Croce et al. (2011) proposed the Smoothed Partial Tree Kernel (SPTK), which can also match fragments differing in node labels. For example, consider two constituency tree fragments which differ only for one lexical node. SPTK can establish a soft match between the two fragments by associating the lexicals with vectors and by computing the cosine similarity between the latter. In previous work for QC, vectors were obtained by applying Latent Semantic Analysis (LSA) to a large corpus of textual documents. We use neural word embeddings as in Filice et al. (2015) to encode words. Differently from them, we explore specific embeddings by also deriving a vector representation for the context around each word. Finally, we define a new approach based on the category of the sentence of the target word.",
"cite_spans": [
{
"start": 234,
"end": 253,
"text": "Croce et al. (2011)",
"ref_id": "BIBREF5"
},
{
"start": 779,
"end": 799,
"text": "Filice et al. (2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernels-based Lexical Similarity",
"sec_num": "3"
},
{
"text": "SPTK can be defined as follows: let the set F = {f 1 , f 2 , . . . , f |F | } be a tree fragment space and \u03c7 i (n) be an indicator function, equal to 1 if the target f i is rooted at node n, and equal to 0 otherwise. A TK function over T 1 and T 2 is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "T K(T 1 , T 2 ) = n 1 \u2208N T 1 n 2 \u2208N T 2 \u2206(n 1 , n 2 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "where N T 1 and N T 2 are the sets of nodes of T 1 and T 2 , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "\u2206(n 1 , n 2 ) = |F | i=1 \u03c7 i (n 1 )\u03c7 i (n 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": ". The latter is equal to the number of common fragments rooted in the n 1 and n 2 nodes. The \u2206 function for SPTK 1 defines a rich kernel space as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "1. If n 1 and n 2 are leaves then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "\u2206 \u03c3 (n 1 , n 2 ) = \u00b5\u03bb\u03c3(n 1 , n 2 ); else 2. \u2206 \u03c3 (n 1 , n 2 ) = \u00b5\u03c3(n 1 , n 2 ) \u00d7 \u03bb 2 + I 1 , I 2 ,l( I 1 )=l( I 2 ) \u03bb d( I 1 )+d( I 2 ) l( I 1 ) j=1 \u2206 \u03c3 (c n 1 ( I 1j ), c n 2 ( I 2j )) , (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "where \u03c3 is any similarity between nodes, e.g., between their lexical labels, \u00b5, \u03bb \u2208 [0, 1] are two decay factors, I 1 and I 2 are two sequences of indices, which index subsequences of children u,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "I = (i 1 , ..., i |u| ), in sequences of children s, 1 \u2264 i1 < ... < i |u| \u2264 |s|, i.e., such that u = s i 1 ..s i |u| , and d( I) = i |u| \u2212 i 1 + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "is the distance between the first and last child. c is one of the children of the node n, also indexed by I. SPTK has been shown to be rather efficient in practice (Croce et al., 2011 (Croce et al., , 2012 .",
"cite_spans": [
{
"start": 164,
"end": 183,
"text": "(Croce et al., 2011",
"ref_id": "BIBREF5"
},
{
"start": 184,
"end": 205,
"text": "(Croce et al., , 2012",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Partial Tree Kernel",
"sec_num": "3.1"
},
{
"text": "Syntactic and semantic structures can play an important role in building effective representations for machine learning algorithms. The automatic extraction of features from tree structured representations of text is natural within the TK framework. Therefore, several studies have shown the power of associating rich structural encoding with TKs (Severyn et al., 2013; Tymoshenko and Moschitti, 2015) . In Croce et al. 2011, a wide array of representations derived from the parse tree of a sentence are evaluated. The Lexical Centered Tree (LCT) is shown to be the best performing tree layout for the QC task. An LCT, as shown in Figure 1 , contains lexicals at the pre-terminal levels, and their grammatical functions and POS-tags are added as leftmost children. In addition, each lexical node is encoded as a word lemma, and has a suffix which is composed by a special :: symbol and the first letter of the POS-tag of the word. These marked lexical nodes are then mapped to their corresponding numerical vectors, which are used in the kernel computation. Only lemmas sharing the same POStag are compared in the semantic kernel similarity.",
"cite_spans": [
{
"start": 347,
"end": 369,
"text": "(Severyn et al., 2013;",
"ref_id": "BIBREF38"
},
{
"start": 370,
"end": 401,
"text": "Tymoshenko and Moschitti, 2015)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [
{
"start": 631,
"end": 639,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Structural representation for text",
"sec_num": "3.2"
},
{
"text": "We propose to compute the similarity function \u03c3 in SPTK as the cosine similarity of word embeddings obtained with neural networks. We experimented with the popular Continuous Bag-Of-Words (CBOW), SkipGram models (Mikolov et al., 2013) , and GloVe (Pennington et al., 2014) .",
"cite_spans": [
{
"start": 212,
"end": 234,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF21"
},
{
"start": 247,
"end": 272,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Word Embeddings for SPTK",
"sec_num": "4"
},
{
"text": "As in (Croce et al., 2011) , we observed that embeddings learned from raw words are not the most effective in the TK computation. Thus, similarly to Trask et al. 2015, we attach a special :: suffix plus the first letter of the part-of-speech (POS) to the word lemmas. This way, we differentiate words by their tags, and learn specific embedding vectors for each of them. This approach increases the performance of our models.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Croce et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-speech tags in word embeddings",
"sec_num": "4.1"
},
{
"text": "Although a word vector encodes some information about word co-occurrences, the context around a word, as also suggested in Iacobacci et al. (2016) , can explicitly contribute to the word similarity, especially when the target words are infrequent. For this reason, we also represent each word as the concatenation of its embedding with a second vector, which is supposed to model the context around the word. We build this vector as (i) a simple average of the embeddings of the other words in the sentence, and (ii) with a method specifically designed to embed longer units of text, namely para-graph2vec (Le and Mikolov, 2014) . This is similar to word2vec: a network is trained to predict a word given its context, but it can access to an additional vector specific for the paragraph, where the word and the context are sampled.",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "Iacobacci et al. (2016)",
"ref_id": "BIBREF12"
},
{
"start": 614,
"end": 628,
"text": "Mikolov, 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling the word context",
"sec_num": "4.2"
},
{
"text": "As described in Sec. 2, a Siamese Network encodes two inputs into a vectorial representation, reusing the network parameters. In this section, we briefly describe the standard units used in our Siamese Network to encode sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Networks for Encoding Text",
"sec_num": "5"
},
{
"text": "Recurrent Neural Networks (RNNs) constitute one of the main architectures used to model sequences, and they have seen a wide adoption in the NLP literature. Vanilla RNNs consume a sequence of vectors one step at the time, and update their internal state as a function of the new input and their previous internal state. For this reason, at any given step, the internal state depends on the entire history of previous states. These networks suffer from the vanishing gradient problem (Bengio et al., 1994) , which is mitigated by a popular RNN variant, the Long Short Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997 ). An LSTM can control the amount of information from the input that affects its internal state, the amount of information in the internal state that can be forgotten, and how the internal state affects the output of the network.",
"cite_spans": [
{
"start": 483,
"end": 504,
"text": "(Bengio et al., 1994)",
"ref_id": "BIBREF1"
},
{
"start": 594,
"end": 627,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent neural network units",
"sec_num": "5.1"
},
{
"text": "The Gated Recurrent Unit (GRU) (Chung et al., 2014) is an LSTM variant with similar performance and less parameters, thus faster to train. Since we use this recurrent unit in our model, we briefly review it. Let x t and s t be the input vector and state at timestep t, given a sequence of input vectors (x 1 , ..., x T ), the GRU computes a sequence of states (s 1 , ..., s T ) according to the following equations:",
"cite_spans": [
{
"start": 31,
"end": 51,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent neural network units",
"sec_num": "5.1"
},
{
"text": "z = \u03c3(x t U z + s t\u22121 W z ) r = \u03c3(x t U r + s t\u22121 W r ) h = tanh(x t U h + (s t\u22121 \u2022 r)W h ) s t = (1 \u2212 z) \u2022 h + z \u2022 s t\u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent neural network units",
"sec_num": "5.1"
},
{
"text": "The GRU has an update, z, and reset gate, r, and does not have an internal memory beside the internal state. The U and W matrices are parameters of the model. \u03c3 is the logistic function, the \u2022 operator denotes the elementwise (Hadamard) product, and tanh is the hyperbolic tangent function. All the non-linearities are applied elementwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent neural network units",
"sec_num": "5.1"
},
{
"text": "The aforementioned recurrent units consume the input sequence in one direction, and thus earlier internal states do not have access to future steps. Bidirectional RNNs (Schuster and Paliwal, 1997) solve this issue by keeping a forward and backward internal states that are computed by going through the input sequence in both directions. The state at any given step will be the concatenation of the forward and backward state at that step, and, in our case, will contain useful information from both the left and right context of a word.",
"cite_spans": [
{
"start": 168,
"end": 196,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional networks",
"sec_num": "5.2"
},
{
"text": "The methods to model the context described in Sec. 4.2 augment the target word vector with dimensions derived from the entire sentence. This provides some context that may increase the discriminative power of SPTK. The latter can thus use a similarity between two words dependent on the sentences which they belong to. For example, when SPTK carries out a QC task, the sentences above have higher probability to share similar context if they belong to the same category. Still, this approach is rather shallow as two words of the same sentence would be associated with almost the same context vector. That is, the approach does not really transform the embedding of a given word as a function of its context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Word Similarity Network",
"sec_num": "6"
},
{
"text": "An alternative approach is to train the context embedding using neural networks on a sense annotated corpus, which can remap the word embeddings in a supervised fashion. However, since there are not enough large disambiguated corpora, we need to approximate the word senses with coarse-grained information, e.g., the category of the context. In other words, we can train a network to decide if two target words are sampled from sentences belonging to the same category. This way, the states of the trained network corresponding to each word can be eventually used as word-in-context embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Word Similarity Network",
"sec_num": "6"
},
{
"text": "In the next sections, we present the classification task designed for this purpose, and then the architecture of our Siamese Network for learning contextual word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Word Similarity Network",
"sec_num": "6"
},
{
"text": "The end task that we consider is the categorization of a sentence s \u2208 D = {s 1 , ..., s n } into one class c i \u2208 C = {c 1 , ..., c m }, where D is our collection of n sentences, and C is the set of m sentence categories. Intuitively, we define the derived task as determining if two words extracted from two different sentences share the same sentence category or not. Our classifier learns word representations while accessing to the entire sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining the derived classification task",
"sec_num": "6.1"
},
{
"text": "More formally, we sample a pair of labeled sentences s i , c i , s j , c j from our training set, where i = j. Then, we sample a word from each sen-tence, w a \u2208 s i and w b \u2208 s j , and we assign a label y \u2208 {0, 1} to the word pair. We set y = 0 if c i = c j , and y = 1 if c i = c j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining the derived classification task",
"sec_num": "6.1"
},
{
"text": "Our goal is to learn a mapping f such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining the derived classification task",
"sec_num": "6.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sim(f (s i , w a ), f (s j , w b )) \u2208 [0, 1],",
"eq_num": "(2)"
}
],
"section": "Defining the derived classification task",
"sec_num": "6.1"
},
{
"text": "where sim is a similarity function between two vectors that should output values close to 1 when y = 1, and values close to 0 when y = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining the derived classification task",
"sec_num": "6.1"
},
{
"text": "To generate sentence pairs, we randomly sample sentences from different categories. Pairs labeled as positive are constructed by randomly sampling sentences from the same category, without replacement. Pairs labeled as negative are constructed by randomly sampling the first sentence from one category, and the second sentence from the remaining categories, again without replacement. Note that we oversample low frequency categories, and sample positive and negative examples several times to collect diverse pairs. We remove duplicates, and stop the generation process at approximately 500,000 sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data construction for the derived task",
"sec_num": "6.2"
},
{
"text": "We model the function f that maps a sentence and one of its words into a fixed size representation as a neural network. We aim at using the f encoder to map different word/sentence pairs into the same embedding space. Since the two input sentences play a symmetric role in our desired similarity and we need to use the same weights for both, we opt for a Siamese architecture (Chopra et al., 2005) . In this setting, the same network is applied to two input instances reusing the weights. Alternatively, the network can be seen as having two branches that share all the parameter weights.",
"cite_spans": [
{
"start": 376,
"end": 397,
"text": "(Chopra et al., 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional GRUs for Word Similarity",
"sec_num": "6.3"
},
{
"text": "The optimization strategy is what differentiates our Siamese Network from others that compute textual similarity. We do not compute the similarity (and thus the loss) between two sentences. Instead, we compute the similarity between the contextual representations of two random words from the two sentences. This is clearly depicted in Fig. 2 . The input words are mapped to integer ids, which are looked up in an embedding matrix to retrieve the corresponding embedding vectors. The sequence of vectors is then consumed by a 3-layer Bidirectional GRU (BiGRU). We selected a BiGRU for our experiments as they are more efficient and accurate than LSTMs for our tasks. We tried other architectures, including convolutional networks, but RNNs gave us better results with less complexity and tuning effort. Note that the weights of the RNNs are shared between the two branches.",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 342,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Bidirectional GRUs for Word Similarity",
"sec_num": "6.3"
},
{
"text": "Each RNN layer produces a state for each word, which is consumed by the next RNN in the stack. From the top layer, the state corresponding to the word in the similarity pair is selected. This state encodes the word given its sentential context. Thus, the first layer, BiGRU , maps the sequence of input vectors (x 1 , ..., x T ), into a sequence of states (s 1 , ..., s T ), the second, BiGRU , transforms those states into (s 1 , ..., s T ), and the third, BiGRU , produces the final representations of the words in context (s 1 , ..., s T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional GRUs for Word Similarity",
"sec_num": "6.3"
},
{
"text": "Eventually, the network computes the similarity of a pair of encoded words, selected from the two sentences. We optimize the cosine similarity to match the similarity function used in SPTK. We rescale the output similarity in the [0, 1] range and train the network to minimize the log loss between predictions and true labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional GRUs for Word Similarity",
"sec_num": "6.3"
},
{
"text": "We compare SPTK models with our tree kernel model using neural word embeddings (NSPTK) on question classification (QC), a central task for question answering, and on sentiment classfication (SC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "Data. The QC dataset (Li and Roth, 2006) contains a set of questions labelled according to a twolayered taxonomy, which describes their expected answer type. The coarse layer maps each question into one of 6 classes: Abbreviation, Description, Entity, Human, Location and Number. Our experimental setting mirrors the setting of the original study: we train on 5,452 questions and test on 500. The SC dataset is the one of SemEval Twitter'13 for message-level polarity classification (Nakov et al., 2013) . The dataset is organized in a training, development and test sets containing respectively 9,728, 1,654 and 3,813 tweets. Each tweet is labeled as positive, neutral or negative. The only preprocessing step we perform on tweets is to replace user mentions and url with a <USER> and <URL> token, respectively.",
"cite_spans": [
{
"start": 21,
"end": 40,
"text": "(Li and Roth, 2006)",
"ref_id": "BIBREF19"
},
{
"start": 483,
"end": 503,
"text": "(Nakov et al., 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
{
"text": "In the cross-validation experiments, we use the training data to produce the training and test folds, whereas we use the original test set as our validation set for tuning the parameters of the network. Word embeddings. Learning high quality word embeddings requires large textual corpora. We train all the vectors for QC on the ukWaC corpus (Ferraresi et al., 2008) , also used in Croce et al. (2011) to obtain LSA vectors. The corpus includes an annotation layer produced with Tree-Tagger 2 . We process the documents by attaching the POS-tag marker to each lemma. We trained paragraph2vec vectors using the Gensim 3 toolkit. Word embeddings for the SC task are learned on a corpus of 50M English tweets collected from the Twitter API over two months, using word2vec and setting the dimension to 100. Neural model. We use GloVe word embeddings (300 dimensions), and we fix them during training. Embeddings for words that are not present in The size of the forward and backward states of the BiGRUs is set to 100, so the resulting concatenated state has 200 dimensions. The number of stacked bidirectional networks is three and it was tuned on a development set. This allows the network to have high capacity, fit the data, and have the best generalization ability. The final layer learns higher order representations of the words in context. We did not use dropout as a regularization mechanism since it did not show a significant difference on the performance of the network. The network parameters are trained using the Adam optimizer (Kingma and Ba, 2014), with a learning rate of 0.001.",
"cite_spans": [
{
"start": 342,
"end": 366,
"text": "(Ferraresi et al., 2008)",
"ref_id": "BIBREF8"
},
{
"start": 382,
"end": 401,
"text": "Croce et al. (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
{
"text": "The training examples are fed to the network in mini-batches. The latter are balanced between positive and negative examples by picking 32 pairs of sentences sharing the same category, and 32 pairs of sentences from different categories. Batches of 64 sentences are fed to the network. The number of words sampled from each sentence is fixed to 4, and for this reason the final loss is computed over 256 pairs of words in context, for each mini-batch. The network is then trained for 5 epochs, storing the parameters corresponding to the best registered accuracy on the validation set. Those weights are later loaded and used to encode the words in a sentence by taking their corresponding output states from the last BiGRU unit. Structural models. We trained the tree kernel Dong et al. (2015) 72.8 Table 6 : SC results for NSPTK with word embeddings and the word-in-context embeddings. Runs of selected systems are also reported. models using SVM-Light-TK (Moschitti, 2004) , an SVM-Light extension (Joachims, 1999) with tree kernel support. We modified the software to lookup specific vectors for each word in a sentence. We preprocessed each sentence with the LTH parser 4 and used its output to construct the LCT. We used the parameters for the QC classifiers from Croce et al. (2011) , while we selected them on the Twitter'13 dev. set for the SC task. Table 2 shows the QC accuracy of NSPTK with CBOW, SkipGram and GloVe. The results are reported for vector dimensions (dim) ranging from 50 to 1000, with a fixed window size of 5. The performance for the CBOW hierarchical softmax (hs) and negative sampling (ns), and for the SkipGram hs settings are similar. For the SkipGram ns settings, the accuracy is slightly lower for smaller dimension sizes. GloVe embeddings yield a lower accuracy, which steadily increases with the size of the embeddings. In general, a higher dimension size produces higher accuracy, but also makes the training more expensive. 500 dimensions seem a good trade-off between performance and computational cost.",
"cite_spans": [
{
"start": 776,
"end": 794,
"text": "Dong et al. (2015)",
"ref_id": "BIBREF7"
},
{
"start": 958,
"end": 975,
"text": "(Moschitti, 2004)",
"ref_id": "BIBREF22"
},
{
"start": 1001,
"end": 1017,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF13"
},
{
"start": 1270,
"end": 1289,
"text": "Croce et al. (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 800,
"end": 807,
"text": "Table 6",
"ref_id": null
},
{
"start": 1359,
"end": 1366,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
{
"text": "To better validate the performance of NSPTK, and since the usual test set may have reached a saturation point, we cross-validate some models. Table 7 : Sample of sentences where NSPTK with word vectors fails, and the BiGRU model produces correct classifications.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Context Embedding Results",
"sec_num": "7.2"
},
{
"text": "We use the training set to perform a 5-fold stratified cross-validation (CV), such that the distribution of labels in each fold is similar. Table 3 shows the cross-validated results for a subset of word embedding models. Neural embeddings seem to give a slightly higher accuracy than LSA. A more substantial performance edge may come from modeling the context, thus we experimented with word embeddings concatenated to context embeddings. Table 4 shows the results of NSPTK using different word encodings. The word and context columns refer to the model used for encoding the word and the context, respectively. These models are word2vec (w2v) and paragraph2vec (p2v). The word2vec vector for the context is produced by averaging the embedding vectors of the other words in the sentence, i.e., excluding the target word. The paragraph2vec model has its own procedure to embed the words in the context. CV results marked with \u2020 are significant with a p-value < 0.005. The cross-validation results reveal that word2vec embeddings without context are a tough baseline to beat, suggesting that standard ways to model the context are not effective.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 439,
"end": 446,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Context Embedding Results",
"sec_num": "7.2"
},
{
"text": "Word Similarity Table 5 shows the results of encoding the words in context using a more sophisticated approach: mapping the word to a representation learned with the Siamese Network that we optimize on the derived classification task presented in Section 6.1. The NSPTK operating on word vectors (best vectors from Table 3 ) concatenated with the wordin-context vectors produced by the stacked Bi-GRU encoder, registers a significant improvement over word vectors alone. In this case, the results marked with \u2020 are significant with a p-value < 0.002. This indicates that the strong similarity contribution coming from word vectors is successfully affected by the word-in-context vectors from the network. The original similarities are thus modulated to be more effective for the final clas-sification task. Another possible advantage of the model is that unknown words, which do not participate in the context average of simpler model, have a potentially more useful representation in the internal states of the network. Table 6 reports the results on the SC task. This experiment shows that incorporating the context in the similarity computation slightly improves the performance of the NSPTK. The real improvement, 12.31 absolute percent points over using word vectors alone, comes from modeling the words in context with the BiGRU encoder, confirming it as an effective strategy to improve the modeling capabilities of NSPTK. Interestingly, our model with a single kernel function and without complex text normalization techniques outperforms a multikernel system (Castellucci et al., 2013) , when the word-incontext embeddings are incorporated. The multikernel system is applied on preprocessed text and includes a Bag-Of-Words Kernel, a Lexical Semantic Kernel, and a Smoothed Partial Tree Kernel. State-of-the-art systems (Dong et al., 2015; Severyn and Moschitti, 2015b) include many lexical and clustering features, sentiment lexicons, and distant supervision techniques. Our approach does not include any of the former.",
"cite_spans": [
{
"start": 1568,
"end": 1594,
"text": "(Castellucci et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 1829,
"end": 1848,
"text": "(Dong et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 1849,
"end": 1878,
"text": "Severyn and Moschitti, 2015b)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 315,
"end": 322,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1021,
"end": 1028,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of our Bidirectional GRU for",
"sec_num": "7.3"
},
{
"text": "An error analysis on the QC task reveals the What questions as the most ambiguous. Table 7 contains some of the successes of the BiGRU model with respect to the model using only word vectors. Those wins can be explained by the effect of the contextual word vectors on the kernel similarity. In Question 1, the meaning of occupation is affected by the presence of a person name. In Question 2, the word level loses its prevalent association with quantities. In questions 3 to 5, the underlined words are a strong indicator of locations/places, and the kernel similarity may be dominated by their corresponding word vectors. BiGRU vectors are instead able to effectively remodulate the kernel similarity and induce a correct classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Wins of the BiGRU model",
"sec_num": "7.5"
},
{
"text": "In this paper, we applied neural network models for learning representations with semantic convolution tree kernels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "We evaluated the main distributional representation methods for computing semantic similarity inside the kernel. In addition, we augmented the vectorial representations of words with information coming from the sentential content. Word vectors alone revealed to be difficult to improve upon. To better model the context, we proposed word-in-context representations extracted from the states of a recurrent neural network. Such network learns to decide if two words are sampled from sentences which share the same category label. The resulting embeddings are able to improve on the selected tasks when used in conjunction with the original word embeddings, by injecting more contextual information for the modulation of the kernel similarity. We show that our approach can improve the accuracy of the convolution semantic tree kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "For a similarity score between 0 and 1, a normalization in the kernel space, i.e.T K(T 1 ,T 2 ) \u221a T K(T 1 ,T 1 )\u00d7T K(T 2 ,T 2 )is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cis.uni-muenchen.de/ schmid/tools/TreeTagger/ 3 https://radimrehurek.com/gensim/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.cs.lth.se",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the EC project CogNet, 671625 (H2020-ICT-2014-2, Research and Innovation action). The first author was supported by the Google Europe Doctoral Fellowship Award 2015. Many thanks to the anonymous reviewers for their valuable suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments.",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Globally Normalized Transition-Based Neural Networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Presta",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2442--2452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliak- sei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Glob- ally Normalized Transition-Based Neural Networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2442-2452. http://www.aclweb.org/anthology/P16-1231.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning Long-term Dependencies with Gradient Descent is Difficult",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Frasconi",
"suffix": ""
}
],
"year": 1994,
"venue": "Trans. Neur. Netw",
"volume": "5",
"issue": "2",
"pages": "157--166",
"other_ids": {
"DOI": [
"10.1109/72.279181"
]
},
"num": null,
"urls": [],
"raw_text": "Y. Bengio, P. Simard, and P. Frasconi. 1994. Learn- ing Long-term Dependencies with Gradient De- scent is Difficult. Trans. Neur. Netw. 5(2):157-166. https://doi.org/10.1109/72.279181.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "UNITOR: Combining Syntactic and Semantic Kernels for Twitter Sentiment Analysis",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Castellucci",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Filice",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation (Se-mEval 2013). Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "369--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giuseppe Castellucci, Simone Filice, Danilo Croce, and Roberto Basili. 2013. UNITOR: Combin- ing Syntactic and Semantic Kernels for Twit- ter Sentiment Analysis. In Second Joint Con- ference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh In- ternational Workshop on Semantic Evaluation (Se- mEval 2013). Association for Computational Lin- guistics, Atlanta, Georgia, USA, pages 369-374. http://www.aclweb.org/anthology/S13-2060.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning a similarity metric discriminatively, with application to face verification",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Raia",
"middle": [],
"last": "Hadsell",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2005,
"venue": "Computer Vision and Pattern Recognition",
"volume": "1",
"issue": "",
"pages": "539--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. IEEE, volume 1, pages 539-546.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555 .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Structured Lexical Similarity via Convolution Kernels on Dependency Trees",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2011,
"venue": "In In EMNLP. Edinburgh, Scotland",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured Lexical Similar- ity via Convolution Kernels on Dependency Trees. In In EMNLP. Edinburgh, Scotland, UK. http://www.aclweb.org/anthology/D11-1096.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Verb Classification using Distributional Similarity in Syntactic and Semantic Structures",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2012,
"venue": "ACL (1). The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "263--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Croce, Alessandro Moschitti, Roberto Basili, and Martha Palmer. 2012. Verb Classification using Distributional Similarity in Syntactic and Semantic Structures. In ACL (1). The Association for Com- puter Linguistics, pages 263-272.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Splusplus: A Feature-Rich Two-stage Classifier for Sentiment Analysis of Tweets",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yichun",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "515--519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Furu Wei, Yichun Yin, Ming Zhou, and Ke Xu. 2015. Splusplus: A Feature-Rich Two-stage Classifier for Sentiment Analysis of Tweets. In Pro- ceedings of the 9th International Workshop on Se- mantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 515-519. http://www.aclweb.org/anthology/S15- 2086.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Introducing and evaluating ukWaC, a very large web-derived corpus of English",
"authors": [
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 4th Web as Corpus Workshop (WAC-4) Can we beat Google",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWaC, a very large web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop (WAC-4) Can we beat Google?. page 47.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "KeLP at SemEval-2016 Task 3: Learning Semantic Relations between Questions and Answers",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Filice",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1116--1123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Filice, Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2016. KeLP at SemEval- 2016 Task 3: Learning Semantic Relations be- tween Questions and Answers. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Association for Com- putational Linguistics, San Diego, California, pages 1116-1123. http://www.aclweb.org/anthology/S16- 1172.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Structural Representations for Learning Relations between Pairs of Texts",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Filice",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Da San",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Martino",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1003--1013",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Filice, Giovanni Da San Martino, and Alessandro Moschitti. 2015. Structural Repre- sentations for Learning Relations between Pairs of Texts. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1003-1013. http://www.aclweb.org/anthology/P15-1097.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Embeddings for Word Sense Disambiguation: An Evaluation Study",
"authors": [
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "897--907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for Word Sense Disambiguation: An Evaluation Study. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers). Association for Computa- tional Linguistics, Berlin, Germany, pages 897-907. http://www.aclweb.org/anthology/P16-1085.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Making Large-scale Support Vector Machine Learning Practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods",
"volume": "",
"issue": "",
"pages": "169--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1999. Making Large-scale Support Vector Machine Learning Practical. In Bernhard Sch\u00f6lkopf, Christopher J. C. Burges, and Alexan- der J. Smola, editors, Advances in Kernel Methods, MIT Press, Cambridge, MA, USA, pages 169-184. http://dl.acm.org/citation.cfm?id=299094.299104.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Convolutional Neural Network for Modelling Sentences",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "655--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A Convolutional Neural Net- work for Modelling Sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Baltimore, Maryland, pages 655-665. http://www.aclweb.org/anthology/P14-1062.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Convolutional Neural Networks for Sentence Classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP). Association for Com- putational Linguistics, Doha, Qatar, pages 1746- 1751. http://www.aclweb.org/anthology/D14-1181.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adam: A Method for Stochastic Optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. In Proceed- ings of the 3rd International Conference on Learn- ing Representations (ICLR).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributed Representations of Sentences and Documents",
"authors": [
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc V. Le and Tomas Mikolov. 2014. Dis- tributed Representations of Sentences and Documents. CoRR abs/1405.4053.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dependency-Based Word Embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- Based Word Embeddings. In Proceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers). Association for Computational Lin- guistics, Baltimore, Maryland, pages 302-308. http://www.aclweb.org/anthology/P14-2050.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning question classifiers: the role of semantic information",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2006,
"venue": "Natural Language Engineering",
"volume": "12",
"issue": "3",
"pages": "229--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Li and Dan Roth. 2006. Learning question clas- sifiers: the role of semantic information. Natural Language Engineering 12(3):229-249.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dependency-based Convolutional Neural Networks for Sentence Embedding",
"authors": [
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "174--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingbo Ma, Liang Huang, Bowen Zhou, and Bing Xi- ang. 2015. Dependency-based Convolutional Neu- ral Networks for Sentence Embedding. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 174-179. http://www.aclweb.org/anthology/P15-",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. ICLR Workshop .",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Study on Convolution Kernels for Shallow Semantic Parsing",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2004. A Study on Convolution Kernels for Shallow Semantic Parsing. In Proceed- ings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Compu- tational Linguistics, Stroudsburg, PA, USA, ACL '04.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Efficient Convolution Kernels for Dependency and Constituent Syntactic Trees",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 17th European Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "318--329",
"other_ids": {
"DOI": [
"10.1007/1187184232"
]
},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2006. Efficient Convolution Kernels for Dependency and Constituent Syntac- tic Trees. In Proceedings of the 17th Euro- pean Conference on Machine Learning. Springer- Verlag, Berlin, Heidelberg, ECML'06, pages 318- 329. https://doi.org/10.1007/11871842 32.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Siamese Recurrent Architectures for Learning Sentence Similarity",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Thyagarajan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese Recurrent Architectures for Learning Sentence Similarity. In Proceedings of the",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI'16",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "2786--2792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thirtieth AAAI Conference on Artificial Intelli- gence. AAAI Press, AAAI'16, pages 2786-2792.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "SemEval-2013 Task 2: Sentiment Analysis in Twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation (Se-mEval 2013). Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "312--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wil- son. 2013. SemEval-2013 Task 2: Sentiment Analysis in Twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh Inter- national Workshop on Semantic Evaluation (Se- mEval 2013). Association for Computational Lin- guistics, Atlanta, Georgia, USA, pages 312-320. http://www.aclweb.org/anthology/S13-2052.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning Text Similarity with Siamese Recurrent Networks",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Neculoiu",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Versteegh",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Rotaru",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "148--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Neculoiu, Maarten Versteegh, and Mihai Rotaru. 2016. Learning Text Similarity with Siamese Recur- rent Networks. In Proceedings of the 1st Workshop on Representation Learning for NLP. Association for Computational Linguistics, Berlin, Germany, pages 148-157. http://anthology.aclweb.org/W16- 1617.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Aspect-Based Sentiment Analysis Using Tree Kernel Based Relation Extraction",
"authors": [],
"year": 2015,
"venue": "Computational Linguistics and Intelligent Text Processing: 16th International Conference, CICLing",
"volume": "",
"issue": "",
"pages": "114--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Hai Nguyen and Kiyoaki Shirai. 2015. Aspect- Based Sentiment Analysis Using Tree Kernel Based Relation Extraction. In Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Pro- cessing: 16th International Conference, CICLing 2015. Springer International Publishing, Cairo, Egypt, pages 114-125.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Semantic Representations for Domain Adaptation: A Case Study on the Tree Kernel-based Method for Relation Extraction",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Thien Huu Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "635--644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen, Barbara Plank, and Ralph Grish- man. 2015. Semantic Representations for Domain Adaptation: A Case Study on the Tree Kernel-based Method for Relation Extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Compu- tational Linguistics, Beijing, China, pages 635-644. http://www.aclweb.org/anthology/P15-1062.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "End-to-End Relation Extraction Using Distant Supervision from External Semantic Repositories",
"authors": [
{
"first": "Truc",
"middle": [],
"last": "Vien",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "277--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Truc Vien T. Nguyen and Alessandro Moschitti. 2011. End-to-End Relation Extraction Using Distant Su- pervision from External Semantic Repositories. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 277- 282. http://www.aclweb.org/anthology/P11-2048.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Convolution Kernels on Constituent, Dependency and Sequential Structures for Relation Extraction",
"authors": [
{
"first": "T",
"middle": [],
"last": "Truc-Vien",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1378--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Truc-Vien T. Nguyen, Alessandro Moschitti, and Giuseppe Riccardi. 2009. Convolution Kernels on Constituent, Dependency and Sequential Structures for Relation Extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Association for Computa- tional Linguistics, Singapore, pages 1378-1387. http://www.aclweb.org/anthology/D/D09/D09- 1143.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing. pages 1532-1543. http://www.aclweb.org/anthology/D14-1162.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Embedding Semantic Similarity in Tree Kernels for Domain Adaptation of Relation Extraction",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1498--1507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank and Alessandro Moschitti. 2013. Em- bedding Semantic Similarity in Tree Kernels for Domain Adaptation of Relation Extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers). Association for Compu- tational Linguistics, Sofia, Bulgaria, pages 1498- 1507. http://www.aclweb.org/anthology/P13-1147.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Noise-Contrastive Estimation for Answer Selection with Deep Neural Networks",
"authors": [
{
"first": "Jinfeng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th ACM International on Conference on Information and Knowledge Management",
"volume": "16",
"issue": "",
"pages": "1913--1916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinfeng Rao, Hua He, and Jimmy Lin. 2016. Noise- Contrastive Estimation for Answer Selection with Deep Neural Networks. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. ACM, New York, NY, USA, CIKM '16, pages 1913-1916.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kuldip",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673-2681.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning to rank short text pairs with convolutional deep neural networks",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "373--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015a. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, pages 373-382.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "UNITN: Training Deep Convolutional Neural Network for Twitter Sentiment Classification",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "464--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015b. UNITN: Training Deep Convolutional Neural Net- work for Twitter Sentiment Classification. In Pro- ceedings of the 9th International Workshop on Se- mantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 464-469. http://www.aclweb.org/anthology/S15- 2079.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Learning Semantic Textual Similarity with Structural Representations",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Nicosia",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the ACL",
"volume": "2",
"issue": "",
"pages": "714--718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn, Massimo Nicosia, and Alessandro Moschitti. 2013. Learning Semantic Textual Sim- ilarity with Structural Representations. In Pro- ceedings of the 51st Annual Meeting of the ACL (Volume 2: Short Papers). ACL, pages 714-718. http://aclweb.org/anthology/P13-2125.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "From symbolic to subsymbolic information in question classification",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Lu\u00edsa",
"middle": [],
"last": "Coheur",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"Cristina"
],
"last": "Mendes",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Wichert",
"suffix": ""
}
],
"year": 2010,
"venue": "Artificial Intelligence Review",
"volume": "35",
"issue": "2",
"pages": "137--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Silva, Lu\u00edsa Coheur, Ana Cristina Mendes, and Andreas Wichert. 2010. From symbolic to sub- symbolic information in question classification. Ar- tificial Intelligence Review 35(2):137-154.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Semantic Compositionality through Recursive Matrix-Vector Spaces",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Brody",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Brody Huval, Christopher D. Man- ning, and Andrew Y. Ng. 2012. Semantic Compositionality through Recursive Matrix-Vector Spaces. In Proceedings of the 2012 Joint Con- ference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning. Association for Computational Linguistics, Jeju Island, Korea, pages 1201-1211. http://www.aclweb.org/anthology/D12-1110.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Mod- els for Semantic Compositionality Over a Senti- ment Treebank. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, Seattle, Washington, USA, pages 1631-1642. http://www.aclweb.org/anthology/D13-1170.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1556--1566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representa- tions From Tree-Structured Long Short-Term Mem- ory Networks. In Proceedings of the 53rd An- nual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers). Association for Compu- tational Linguistics, Beijing, China, pages 1556- 1566. http://www.aclweb.org/anthology/P15-1150.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Improved Representation Learning for Question Answer Matching",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Cicero Dos Santos",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "464--473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2016. Improved Representation Learning for Question Answer Matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers). Association for Computa- tional Linguistics, Berlin, Germany, pages 464-473. http://www.aclweb.org/anthology/P16-1044.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "2015. sense2vec -A Fast and Accurate Method for Word Sense Disambiguation In Neural Word Embeddings",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Trask",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Michalak",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec -A Fast and Accurate Method for Word Sense Disambiguation In Neu- ral Word Embeddings. CoRR abs/1511.06388. http://arxiv.org/abs/1511.06388.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Assessing the Impact of Syntactic and Semantic Structures for Answer Passages Reranking",
"authors": [
{
"first": "Kateryna",
"middle": [],
"last": "Tymoshenko",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM 2015",
"volume": "",
"issue": "",
"pages": "1451--1460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kateryna Tymoshenko and Alessandro Moschitti. 2015. Assessing the Impact of Syntactic and Se- mantic Structures for Answer Passages Reranking. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Man- agement, CIKM 2015, Melbourne, VIC, Australia, October 19 -23, 2015. pages 1451-1460.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The Lexical Centered Tree (LCT) of the lemmatized sentence: \"What is an annotated bibliography?\".",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "The architecture of the Siamese Network. The network computes sim(f (s1, 3), f (s2, 2)). The word embeddings of each sentence are consumed by a stack of 3 Bidirectional GRUs. The two branches of the network share the parameter weights.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF3": {
"content": "<table><tr><td/><td colspan=\"3\">: QC cross-validation accuracies (%) of NSPTK given</td></tr><tr><td colspan=\"3\">embeddings with the selected dimensionalities.</td><td/></tr><tr><td colspan=\"4\">word context QC test accuracy QC CV accuracy</td></tr><tr><td>w2v</td><td>-</td><td>95.2</td><td>86.48</td></tr><tr><td>w2v</td><td>w2v</td><td>95.4</td><td>86.08 \u2020</td></tr><tr><td>w2v</td><td>p2v</td><td>95.0</td><td>86.46</td></tr><tr><td>p2v</td><td>-</td><td>92.8</td><td>82.65 \u2020</td></tr><tr><td>p2v</td><td>p2v</td><td>93.6</td><td>83.47 \u2020</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF4": {
"content": "<table><tr><td>: QC accuracies for the word embeddings (CBOW</td></tr><tr><td>vectors with 500 dimensions, trained using hierarchical soft-</td></tr><tr><td>max) and paragraph2vec.</td></tr><tr><td>the embedding model are randomly initalized by</td></tr><tr><td>sampling a vector of the same dimension from the</td></tr><tr><td>uniform distribution U [\u22120.25, 0.25].</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF6": {
"content": "<table><tr><td>word</td><td>context</td><td colspan=\"2\">SC F P N 1</td></tr><tr><td>w2v</td><td>-</td><td/><td>48.65</td></tr><tr><td>w2v</td><td>w2v</td><td/><td>51.59</td></tr><tr><td colspan=\"2\">w2v BiGRUs</td><td/><td>60.96</td></tr><tr><td colspan=\"2\">SemEval system</td><td/><td>SC F P N 1</td></tr><tr><td colspan=\"3\">Castellucci et al. (2013)</td><td>58.27</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "QC accuracies for NSPTK, using the word-incontext vector produced by the stacked BiGRU encoder trained with the Siamese Network. Word vectors are trained with CBOW (hs) and have 500 dimensions."
},
"TABREF7": {
"content": "<table><tr><td>Question</td><td>Wrong w2v</td><td>Correct BiGRU</td></tr><tr><td>1) What is the occupation of Nicholas Cage ?</td><td>enty</td><td>hum</td></tr><tr><td colspan=\"2\">2) loc</td><td>desc</td></tr><tr><td>4) What is a virtual IP address?</td><td>loc</td><td>desc</td></tr><tr><td>5) What function does a community's water tower serve?</td><td>loc</td><td>desc</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "What level of government (...) is responsible for dealing with racism? num hum 3) What is the Motto for the State of Maryland?"
}
}
}
}