{
"paper_id": "U18-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:12:02.313603Z"
},
"title": "A Comparative Study of Embedding Models in Predicting the Compositionality of Multiword Expressions",
"authors": [
{
"first": "Navnita",
"middle": [],
"last": "Nandakumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {
"region": "Victoria",
"postCode": "3010",
"country": "Australia"
}
},
"email": "nnandakumar@student.unimelb.edu.au"
},
{
"first": "Bahar",
"middle": [],
"last": "Salehi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {
"region": "Victoria",
"postCode": "3010",
"country": "Australia"
}
},
"email": "salehi.b@unimelb.edu.au"
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {
"region": "Victoria",
"postCode": "3010",
"country": "Australia"
}
},
"email": "tbaldwin@unimelb.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we perform a comparative evaluation of off-the-shelf embedding models over the task of compositionality prediction of multiword expressions (\"MWEs\"). Our experimental results suggest that character- and document-level models do capture some aspects of MWE compositionality and are effective at modelling varying levels of compositionality, but ultimately are not as effective as a simple word2vec baseline. However, they have the advantage over word-level models that they do not require token-level identification of MWEs in the training corpus.",
"pdf_parse": {
"paper_id": "U18-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we perform a comparative evaluation of off-the-shelf embedding models over the task of compositionality prediction of multiword expressions (\"MWEs\"). Our experimental results suggest that character- and document-level models do capture some aspects of MWE compositionality and are effective at modelling varying levels of compositionality, but ultimately are not as effective as a simple word2vec baseline. However, they have the advantage over word-level models that they do not require token-level identification of MWEs in the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, the study of the semantic idiomaticity of multiword expressions (\"MWEs\": Baldwin and Kim (2010)) has focused on compositionality prediction, a regression task involving the mapping of an MWE onto a continuous scale, representing its compositionality either as a whole or for each of its component words (Reddy et al., 2011; Ramisch et al., 2016; Cordeiro et al., to appear). In the case of couch potato \"an idler who spends much time on a couch (usually watching television)\", e.g., on a scale of [0, 1] the overall compositionality may be judged to be 0.3, and the compositionality of couch and potato as 0.8 and 0.1, respectively. The main motivation for the study of compositionality is to better understand the semantics of the compound and the semantic relationships between the component words of the MWE, which has applications in various information retrieval and natural language processing tasks (Venkatapathy and Joshi, 2006; Acosta et al., 2011; Salehi et al., 2015b).",
"cite_spans": [
{
"start": 320,
"end": 340,
"text": "(Reddy et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 341,
"end": 362,
"text": "Ramisch et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 363,
"end": 390,
"text": "Cordeiro et al., to appear)",
"ref_id": null
},
{
"start": 924,
"end": 954,
"text": "(Venkatapathy and Joshi, 2006;",
"ref_id": "BIBREF17"
},
{
"start": 955,
"end": 975,
"text": "Acosta et al., 2011;",
"ref_id": "BIBREF0"
},
{
"start": 976,
"end": 997,
"text": "Salehi et al., 2015b)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Separately, there has been burgeoning interest in learning distributed representations of words and their meanings, starting out with word embeddings (Mikolov et al., 2013; Pennington et al., 2014) and now also involving the study of character- and document-level models (Baroni et al., 2014; Le and Mikolov, 2014; Bojanowski et al., 2017; Conneau et al., 2017). This work has been applied in part to predicting the compositionality of MWEs (Salehi et al., 2015a; Hakimi Parizi and Cook, 2018), work that this paper builds on directly, in performing a comparative study of the performance of a range of off-the-shelf representation learning methods over the task of MWE compositionality prediction. Our contributions are as follows: (1) we show that, despite their effectiveness over a range of other tasks, recent off-the-shelf character- and document-level embedding learning methods are inferior to simple word2vec at modelling MWE compositionality; and (2) we demonstrate the utility of using paraphrase data in addition to simple lemmas in predicting MWE compositionality.",
"cite_spans": [
{
"start": 150,
"end": 172,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 173,
"end": 197,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 270,
"end": 291,
"text": "(Baroni et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 292,
"end": 313,
"text": "Le and Mikolov, 2014;",
"ref_id": "BIBREF8"
},
{
"start": 314,
"end": 338,
"text": "Bojanowski et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 339,
"end": 360,
"text": "Conneau et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 441,
"end": 463,
"text": "(Salehi et al., 2015a;",
"ref_id": "BIBREF15"
},
{
"start": 464,
"end": 493,
"text": "Hakimi Parizi and Cook, 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current state-of-the-art in compositionality prediction involves the use of word embeddings (Salehi et al., 2015a). The vector representations of each component word (e.g. couch and potato) and the overall MWE (e.g. couch potato) are taken as a proxy for their respective meanings, and the compositionality of the MWE is then assumed to be proportional to the relative similarity between each of the components and the overall MWE embedding. However, word-level embeddings require token-level identification of each MWE in the training corpus, meaning that if the set of MWEs changes, the model needs to be retrained. This limitation led to research on character-level models, since character-level models can implicitly handle an unbounded vocabulary of component words and MWEs (Hakimi Parizi and Cook, 2018). There has also been work in the extension of word embeddings to document embeddings that map entire sentences or documents to vectors (Le and Mikolov, 2014; Conneau et al., 2017).",
"cite_spans": [
{
"start": 96,
"end": 118,
"text": "(Salehi et al., 2015a)",
"ref_id": "BIBREF15"
},
{
"start": 778,
"end": 808,
"text": "(Hakimi Parizi and Cook, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 945,
"end": 967,
"text": "(Le and Mikolov, 2014;",
"ref_id": "BIBREF8"
},
{
"start": 968,
"end": 989,
"text": "Conneau et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We use two character-level embedding models (fastText and ELMo) and two document-level models (doc2vec and infersent) to compare with word-level word2vec, as used in the state-of-the-art method of Salehi et al. (2015a). In each case, we use canonical pre-trained models, with the exception of word2vec, which must be trained over data with appropriate tokenisation to be able to generate MWE embeddings, as it treats words atomically and cannot generate OOV words.",
"cite_spans": [
{
"start": 196,
"end": 217,
"text": "Salehi et al. (2015a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Methods",
"sec_num": "3"
},
{
"text": "Word embeddings are mappings of words to vectors of real numbers. This helps create a more compact (by means of dimensionality reduction) and expressive (by means of contextual similarity) word representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Embeddings",
"sec_num": "3.1"
},
{
"text": "word2vec We trained word2vec (Mikolov et al., 2013) over the latest English Wikipedia dump. 1 We first pre-processed the corpus, removing XML formatting, stop words and punctuation, to generate clean, plain text. We then iterated through 1% of the corpus (following Hakimi Parizi and Cook (2018)) to find every occurrence of each MWE in our datasets and concatenate them, assuming every occurrence of the component words in sequence to be the compound noun (e.g. every couch potato in the corpus becomes couchpotato). We do this because instead of a single embedding for the MWE, word2vec generates separate embeddings for each of the component words, owing to the space between them. If the model still fails to generate embeddings for either the MWE or its components (due to data sparseness), we assign the MWE a default compositionality score of 0.5 (neutral). In the case of paraphrases, we compute the element-wise average of the embeddings of each of the component words to generate the embedding of the phrase.",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 92,
"end": 93,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Embeddings",
"sec_num": "3.1"
},
{
"text": "1 Dated 02-Oct-2018, 07:23",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Embeddings",
"sec_num": "3.1"
},
{
"text": "In a character embedding model, the vector for a word is constructed from the character n-grams that compose it. Since character n-grams are shared across words, assuming a closed-world alphabet, 2 these models can generate embeddings for OOV words, as well as words that occur infrequently. The two character-level embedding models we experiment with are fastText (Bojanowski et al., 2017) and ELMo (Peters et al., 2018), as detailed below.",
"cite_spans": [
{
"start": 365,
"end": 390,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 400,
"end": 421,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Embeddings",
"sec_num": "3.2"
},
{
"text": "fastText We used the 300-dimensional model pre-trained on Common Crawl and Wikipedia using CBOW. fastText assumes that all words are whitespace delimited, so in order to generate a representation for the combined MWE, we remove any spaces and treat it as a fused compound (e.g. couch potato becomes couchpotato). In the case of paraphrases, we use the same word averaging technique as we did in word2vec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Embeddings",
"sec_num": "3.2"
},
{
"text": "ELMo We used the ElmoEmbedder class in Python's allennlp library. 3 The model was pretrained over SNLI and SQuAD, with a dimensionality of 1024.",
"cite_spans": [
{
"start": 66,
"end": 67,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Embeddings",
"sec_num": "3.2"
},
{
"text": "Note that the primary use case of ELMo is to generate embeddings in context, but we are not providing any context in the input, for consistency with the other models. As such, we are knowingly not harnessing the full potential of the model. However, this naive use of ELMo is not inappropriate, as the relative compositionality of a compound is often predictable from its component words only, even for novel compounds such as giraffe potato (which has a plausible compositional interpretation, as a potato shaped like a giraffe) vs. couch intelligence (where there is no natural interpretation, suggesting that it may be non-compositional).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Embeddings",
"sec_num": "3.2"
},
{
"text": "Document-level embeddings aim to learn vector representations of documents (sentences or even paragraphs), generating a representation of their overall content in the form of a fixed-dimensionality vector. The two document-level embeddings used in this research are doc2vec (Le and Mikolov, 2014) and infersent (Conneau et al., 2017), as detailed below.",
"cite_spans": [
{
"start": 272,
"end": 294,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 309,
"end": 331,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Embeddings",
"sec_num": "3.3"
},
{
"text": "doc2vec We used the gensim implementation of doc2vec (Lau and Baldwin, 2016; \u0158eh\u016f\u0159ek and Sojka, 2010), pretrained on Wikipedia data, in combination with word2vec skip-gram models pretrained on Wikipedia and AP News. 4 infersent We used two versions of infersent of 300 dimensions, using the inbuilt infersent.build_vocab_k_words function to train the model over the 100,000 most popular English words, using: (1) GloVe (Pennington et al., 2014) word embeddings (\"infersent_GloVe\"); and (2) fastText word embeddings (\"infersent_fastText\").",
"cite_spans": [
{
"start": 62,
"end": 76,
"text": "Baldwin, 2016;",
"ref_id": "BIBREF7"
},
{
"start": 77,
"end": 101,
"text": "\u0158eh\u016f\u0159ek and Sojka, 2010)",
"ref_id": null
},
{
"start": 206,
"end": 207,
"text": "4",
"ref_id": null
},
{
"start": 409,
"end": 434,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level Embeddings",
"sec_num": "3.3"
},
{
"text": "In order to measure the overall compositionality of an MWE, we propose the following three broad approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Compositionality",
"sec_num": "4"
},
{
"text": "Our first approach is to directly compare the embeddings of each of the component nouns with the embedding of the MWE via cosine similarity, in one of two ways: (1) pre-combine the embeddings for the component words via element-wise sum, and compare with the embedding for the MWE (\"Direct_pre\"); and (2) compare each individual component word with the embedding for the MWE, and post-hoc combine the scores via a weighted sum (\"Direct_post\"). Formally:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Composition",
"sec_num": "4.1"
},
{
"text": "Direct_pre = cos(mwe, mwe_1 + mwe_2)\nDirect_post = \u03b1 cos(mwe, mwe_1) + (1 \u2212 \u03b1) cos(mwe, mwe_2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Composition",
"sec_num": "4.1"
},
{
"text": "where: mwe, mwe_1, and mwe_2 are the embeddings for the combined MWE, first component and second component, respectively; 5 mwe_1 + mwe_2 is the element-wise sum of the vectors of each of the component words of the MWE; and \u03b1 \u2208 [0, 1] is a scalar which allows us to vary the weight of the respective components in predicting the compositionality of the compound. The intuition behind both of these methods is that if the MWE appears in similar contexts to its components, then it is compositional.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct Composition",
"sec_num": "4.1"
},
{
"text": "Our second approach is to calculate the similarity of the MWE embedding with that of its paraphrases, assuming that we have access to paraphrase data. 6 We achieve this using the following three formulae:",
"cite_spans": [
{
"start": 151,
"end": 152,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrases",
"sec_num": "4.2"
},
{
"text": "Para_first = cos(mwe, para_1)\nPara_all_pre = cos(mwe, \u2211_i para_i)\nPara_all_post = (1/N) \u2211_{i=1}^{N} cos(mwe, para_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrases",
"sec_num": "4.2"
},
{
"text": "where para_1 and para_i denote the embeddings for the first (most popular) and i-th paraphrases, respectively. We apply this method to RAMISCH only, since REDDY does not have any paraphrase data (see Section 5.1 for details).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrases",
"sec_num": "4.2"
},
{
"text": "Our final approach (\"Combined\") is based on the combination of the direct composition and paraphrase methods, as follows: Combined = \u03b2 max(Direct_pre, Direct_post) + (1 \u2212 \u03b2) max(Para_first, Para_all_pre, Para_all_post), where \u03b2 \u2208 [0, 1]. The intuition behind the use of the max operator here to combine the submethods for each of the direct composition and paraphrase methods is that all methods tend to underestimate the compositionality (and empirically, it was superior to taking the mean).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination",
"sec_num": "4.3"
},
{
"text": "We evaluate the models on the following two datasets, each of which comprises 90 English binary noun compounds, rated for compositionality on a scale of 0 (non-compositional) to 5 (compositional). In each case, we evaluate model performance via Pearson's correlation coefficient (r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "REDDY This dataset contains scores for the compositionality of the overall MWE, as well as that of each component word (Reddy et al., 2011) ; in this research, we use the overall compositionality score of the MWE only, and ignore the component scores.",
"cite_spans": [
{
"start": 119,
"end": 139,
"text": "(Reddy et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "RAMISCH Similarly to REDDY, this dataset contains scores for the overall compositionality of the MWE as well as the relative compositionality of each of its component words, in addition to paraphrases suggested by the annotators, in decreasing order of popularity (Ramisch et al., 2016) ; in this research, we use the overall compositionality score and paraphrase data only.",
"cite_spans": [
{
"start": 264,
"end": 286,
"text": "(Ramisch et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "The results of the experiments on REDDY and RAMISCH are presented in Tables 1 and 2, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 84,
"text": "Tables 1 and 2,",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "The first observation to be made is that none of the pretrained models match the state-of-the-art method based on word2vec, despite the simplicity of the method. ELMo and doc2vec in particular perform worse than expected, suggesting that their ability to model non-compositional language is limited. Recall, however, our comment about using ELMo naively, in not including any context when generating the embeddings for the component words and, more importantly, the overall MWE. The results show that doc2vec performs better when representing paraphrases, and struggles with compounds without sentential context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "In Table 1 , we find Direct_post to produce a higher correlation in all cases, with \u03b1 ranging from 0.0 to 0.5, suggesting that the second element (= head) contributes more to the overall compositionality of the MWE than the first element (= modifier); this is borne out in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 273,
"end": 281,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "In Table 2 , on the other hand, we find that, with the exception of ELMo, the \u03b1 values favour the modifier of the MWE over the head (i.e. \u03b1 > 0.5; also seen in Figure 2 ), implying that the former is more significant in predicting the compositionality of the MWE. The reason for the mismatch between the two datasets is not immediately clear, other than the obvious data sparsity.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 160,
"end": 168,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "We also see that the paraphrases achieve a higher correlation across all models, suggesting this is a promising direction for future study. The low \u03b2 values for Combined also confirm that the paraphrase methods have greater predictive power than the direct composition methods. Among the paraphrase experiments, we find that Para_all_post (the average of the similarities of the MWE with each of its paraphrases) consistently achieves the best results. We hypothesize that the paraphrases provide additional information regarding the compounds that further helps determine their compositionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "This paper has investigated the application of a range of embedding generation methods to the task of predicting the compositionality of an MWE, either directly based on the MWE and its component words, or indirectly based on paraphrase data for the MWE. Our results show that modern character- and document-level embedding models are inferior to the simple word2vec approach at the task. We also show that paraphrase data captures valuable information regarding the compositionality of the MWE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Since we have achieved such promising results with the paraphrase data, it might be interesting to consider other possible settings in future tests. While none of the other approaches could outperform word2vec, it is useful to note that they were pretrained and, as such, did not require any manipulation of the training corpus in order to generate vector embeddings of the MWEs. This means they can be applied to new datasets without the need for retraining and are, therefore, more robust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In future work, we intend to train the models used in our study on a fixed corpus, to compare their performance in a more controlled setting. We will also do proper tuning of the hyperparameters over held-out data, and plan to experiment with other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Navnita Nandakumar, Bahar Salehi and Timothy Baldwin. 2018. A Comparative Study of Embedding Models in Predicting the Compositionality of Multiword Expressions. In Proceedings of the Australasian Language Technology Association Workshop, pages 71\u201376.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Which is a safe assumption for languages with smallscale alphabetic writing systems such as English, but potentially problematic for languages with large orthographies such as Chinese (with over 10k ideograms in common use, and many more rarer characters) or Korean (assuming we treat each Hangul syllable as atomic).3 options file = https://bit.ly/2CInZPV, weight file = https://bit.ly/2PvNqHh",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/jhlau/doc2vec/blob/master/README.md 5 Noting that all MWEs are binary in our experiments, but equally that the methods generalise trivially to larger MWEs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Each paraphrase shows an interpretation of the compound's semantics, e.g. olive oil is \"oil from olive\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Identification and treatment of multiword expressions applied to information retrieval",
"authors": [
{
"first": "Otavio",
"middle": [],
"last": "Acosta",
"suffix": ""
},
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": ""
},
{
"first": "Viviane",
"middle": [],
"last": "Moreira",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World",
"volume": "",
"issue": "",
"pages": "101--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Otavio Acosta, Aline Villavicencio, and Viviane Moreira. 2011. Identification and treatment of multiword expressions applied to information retrieval. In Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World. Portland, USA, pages 101-109.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multiword expressions",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
}
],
"year": 2010,
"venue": "Handbook of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin and Su Nam Kim. 2010. Multiword expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing, CRC Press, Boca Raton, USA. 2nd edition.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014). Baltimore, USA, pages 238-247.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5:135-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 670-680.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Do character-level neural network language models capture knowledge of multiword expression compositionality?",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Hakimi Parizi",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions",
"volume": "",
"issue": "",
"pages": "185--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Hakimi Parizi and Paul Cook. 2018. Do character-level neural network language models capture knowledge of multiword expression compositionality? In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018). pages 185-192.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An empirical evaluation of doc2vec with practical insights into document embedding generation",
"authors": [
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "78--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau and Timothy Baldwin. 2016. An empirical evaluation of doc2vec with practical insights into document embedding generation. In Proceedings of the 1st Workshop on Representation Learning for NLP. pages 78-86.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014). Beijing, China, pages 1188-1196.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshop at the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of Workshop at the International Conference on Learning Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP). pages 1532- 1543.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers). pages 2227-2237.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How naked is the naked truth? a multilingual lexicon of nominal compound compositionality",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Ramisch",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Cordeiro",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Zilio",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Idiart",
"suffix": ""
},
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "156--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Ramisch, Silvio Cordeiro, Leonardo Zilio, Marco Idiart, and Aline Villavicencio. 2016. How naked is the naked truth? a multilingual lexicon of nominal compound compositionality. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers). pages 156-161.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An empirical study on compositionality in compound nouns",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "210--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in compound nouns. In Proceedings of 5th Interna- tional Joint Conference on Natural Language Pro- cessing. pages 210-218.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Software framework for topic modelling with large corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software frame- work for topic modelling with large corpora. In Pro- ceedings of the LREC 2010 Workshop on New Chal- lenges for NLP Frameworks. pages 45-50.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A word embedding approach to predicting the compositionality of multiword expressions",
"authors": [
{
"first": "Bahar",
"middle": [],
"last": "Salehi",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "977--983",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahar Salehi, Paul Cook, and Timothy Baldwin. 2015a. A word embedding approach to predicting the com- positionality of multiword expressions. In Proceed- ings of the 2015 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 977-983.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The impact of multiword expression compositionality on machine translation evaluation",
"authors": [
{
"first": "Bahar",
"middle": [],
"last": "Salehi",
"suffix": ""
},
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the NAACL HLT 2015 Workshop on Multiword Expressions",
"volume": "",
"issue": "",
"pages": "54--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahar Salehi, Nitika Mathur, Paul Cook, and Timothy Baldwin. 2015b. The impact of multiword expres- sion compositionality on machine translation eval- uation. In Proceedings of the NAACL HLT 2015 Workshop on Multiword Expressions. Denver, USA, pages 54-59.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Using information about multi-word expressions for the word-alignment task",
"authors": [
{
"first": "Sriram",
"middle": [],
"last": "Venkatapathy",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties",
"volume": "",
"issue": "",
"pages": "20--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sriram Venkatapathy and Aravind K Joshi. 2006. Us- ing information about multi-word expressions for the word-alignment task. In Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties. pages 20-27.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "respectively. In this work, we simplistically present the results for the best \u03b1 and \u03b2 values for each method over a given dataset, meaning we are effectively peaking at our test data. Sensitivity of the \u03b1 hyper-parameter is shown inFigures 1",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Sensitivity analysis of \u03b1 (RAMISCH)",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>: Pearson correlation coefficient for com-</td></tr><tr><td>positionality prediction results on the REDDY</td></tr><tr><td>dataset.</td></tr></table>",
"html": null,
"text": "",
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td colspan=\"2\">Emb. method Direct pre</td><td>Direct post</td><td colspan=\"3\">Para first Para all pre Para all post</td><td>Combined</td></tr><tr><td>word2vec</td><td colspan=\"2\">0.667 0.731 (\u03b1 = 0.7)</td><td>0.714</td><td>0.822</td><td>0.880</td><td>0.880 (\u03b2 = 0.0)</td></tr><tr><td>fastText</td><td>0.395</td><td>0.446 (\u03b1 = 0.7)</td><td>0.569</td><td>0.662</td><td>0.704</td><td>0.704 (\u03b2 = 0.0)</td></tr><tr><td>ELMo</td><td>0.139</td><td>0.295 (\u03b1 = 0.0)</td><td>0.367</td><td>0.642</td><td>0.664</td><td>0.669 (\u03b2 = 0.2)</td></tr><tr><td>doc2vec</td><td>\u22120.146</td><td>0.048 (\u03b1 = 1.0)</td><td>0.405</td><td>0.372</td><td>0.401</td><td>0.419 (\u03b2 = 0.3)</td></tr><tr><td>infersent GloVe</td><td>0.321</td><td>0.427 (\u03b1 = 0.7)</td><td>0.639</td><td>0.704</td><td>0.741</td><td>0.774 (\u03b2 = 0.5)</td></tr><tr><td>infersent fastText</td><td>0.274</td><td>0.380 (\u03b1 = 0.8)</td><td>0.615</td><td>0.781</td><td>0.783</td><td>0.783 (\u03b2 = 0.0)</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Para all post</td><td/></tr><tr><td/><td/><td/><td colspan=\"4\">where \u03b2 \u2208 [0, 1] is a scalar weighting factor to bal-</td></tr><tr><td/><td/><td/><td colspan=\"4\">ance the effects of the two methods. The choice</td></tr></table>",
"html": null,
"text": "Combined =\u03b2 max Direct pre , Direct post +(1 \u2212 \u03b2) max Para first, Para all pre ,",
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table/>",
"html": null,
"text": "Pearson correlation coefficient for compositionality prediction results on the RAMISCH dataset.",
"type_str": "table"
}
}
}
}