| { |
| "paper_id": "J19-1001", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:58:45.097525Z" |
| }, |
| "title": "Unsupervised Compositionality Prediction of Nominal Compounds", |
| "authors": [ |
| { |
| "first": "Silvio", |
| "middle": [], |
| "last": "Cordeiro", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "CNRS", |
| "location": { |
| "region": "LIS" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Aline", |
| "middle": [], |
| "last": "Villavicencio", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Essex and Federal University of Rio", |
| "location": {} |
| }, |
| "email": "alinev@gmail.com" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Idiart", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Federal University of Rio", |
| "location": {} |
| }, |
| "email": "marco.idiart@gmail.com" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "CNRS", |
| "location": { |
| "region": "LIS" |
| } |
| }, |
| "email": "carlos.ramisch@lis-lab.fr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Nominal compounds such as red wine and nut case display a continuum of compositionality, with varying contributions from the components of the compound to its semantics. This article proposes a framework for compound compositionality prediction using distributional semantic models, evaluating to what extent they capture idiomaticity compared to human judgments. For evaluation, we introduce data sets containing human judgments in three languages: English, French, and Portuguese. The results obtained reveal a high agreement between the models and human predictions, suggesting that they are able to incorporate information about idiomaticity. We also present an in-depth evaluation of various factors that can affect prediction, such as model and corpus parameters and compositionality operations. General crosslingual analyses reveal the impact of morphological variation and corpus size in the ability of the model to predict compositionality, and of a uniform combination of the components for best results.", |
| "pdf_parse": { |
| "paper_id": "J19-1001", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Nominal compounds such as red wine and nut case display a continuum of compositionality, with varying contributions from the components of the compound to its semantics. This article proposes a framework for compound compositionality prediction using distributional semantic models, evaluating to what extent they capture idiomaticity compared to human judgments. For evaluation, we introduce data sets containing human judgments in three languages: English, French, and Portuguese. The results obtained reveal a high agreement between the models and human predictions, suggesting that they are able to incorporate information about idiomaticity. We also present an in-depth evaluation of various factors that can affect prediction, such as model and corpus parameters and compositionality operations. General crosslingual analyses reveal the impact of morphological variation and corpus size in the ability of the model to predict compositionality, and of a uniform combination of the components for best results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "It is a universally acknowledged assumption that the meaning of phrases, expressions, or sentences can be determined by the meanings of their parts and by the rules used to combine them. Part of the appeal of this principle of compositionality 1 is that it implies that a meaning can be assigned even to a new sentence involving an unseen combination of familiar words (Goldberg 2015) . Indeed, for natural language processing (NLP), this is an attractive way of linearly deriving the meaning of larger units from their components, performing the semantic interpretation of any text.", |
| "cite_spans": [ |
| { |
| "start": 369, |
| "end": 384, |
| "text": "(Goldberg 2015)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "For representing the meaning of individual words and their combinations in computational systems, distributional semantic models (DSMs) have been widely used. DSMs are based on Harris' distributional hypothesis that the meaning of a word can be inferred from the context in which it occurs (Harris 1954; Firth 1957) . In DSMs, words are usually represented as vectors that, to some extent, capture cooccurrence patterns in corpora (Lin 1998; Landauer, Foltz, and Laham 1998; ; Baroni, Dinu, and Kruszewski 2014) . Evaluation of DSMs has focused on obtaining accurate semantic representations for words, and state-of-the-art models are already capable of obtaining a high level of agreement with human judgments for predicting synonymy or similarity between words (Freitag et al. 2005; Camacho-Collados, Pilehvar, and Navigli 2015; Lapesa and Evert 2017) and for modeling syntactic and semantic analogies between word pairs (Mikolov, Yih, and Zweig 2013) . These representations for individual words can also be combined to create representations for larger units such as phrases, sentences, and even whole documents, using simple additive and multiplicative vector operations (Mitchell and Lapata 2010; Reddy, McCarthy, and Manandhar 2011; , syntax-based lexical functions (Socher et al. 2012) , or matrix and tensor operations (Baroni and Lenci 2010; Bride, Van de Cruys, and Asher 2015) . However, it is not clear to what extent this approach is adequate in the case of idiomatic multiword expressions (MWEs). MWEs fall into a wide spectrum of compositionality; that is, some MWEs are more compositional (e.g., olive oil) while others are more idiomatic (Sag et al. 2002; Baldwin and Kim 2010) . 
In the latter case, the meaning of the MWE may not be straightforwardly related to the meanings of its parts, creating a challenge for the principle of compositionality (e.g., snake oil as a product of questionable benefit, not necessarily an oil and certainly not extracted from snakes).", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 303, |
| "text": "(Harris 1954;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 304, |
| "end": 315, |
| "text": "Firth 1957)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 431, |
| "end": 441, |
| "text": "(Lin 1998;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 442, |
| "end": 474, |
| "text": "Landauer, Foltz, and Laham 1998;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 477, |
| "end": 511, |
| "text": "Baroni, Dinu, and Kruszewski 2014)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 763, |
| "end": 784, |
| "text": "(Freitag et al. 2005;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 785, |
| "end": 830, |
| "text": "Camacho-Collados, Pilehvar, and Navigli 2015;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 831, |
| "end": 853, |
| "text": "Lapesa and Evert 2017)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 923, |
| "end": 953, |
| "text": "(Mikolov, Yih, and Zweig 2013)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 1176, |
| "end": 1202, |
| "text": "(Mitchell and Lapata 2010;", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 1203, |
| "end": 1239, |
| "text": "Reddy, McCarthy, and Manandhar 2011;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 1273, |
| "end": 1293, |
| "text": "(Socher et al. 2012)", |
| "ref_id": "BIBREF74" |
| }, |
| { |
| "start": 1328, |
| "end": 1351, |
| "text": "(Baroni and Lenci 2010;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1352, |
| "end": 1388, |
| "text": "Bride, Van de Cruys, and Asher 2015)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1656, |
| "end": 1673, |
| "text": "(Sag et al. 2002;", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 1674, |
| "end": 1695, |
| "text": "Baldwin and Kim 2010)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this article, we discuss approaches for automatically detecting to what extent the meaning of an MWE can be directly computed from the meanings of its component words, represented using DSMs. We evaluate how accurately DSMs can model the semantics of MWEs with various levels of compositionality compared to human judgments. Since MWEs encompass a large amount of related but distinct phenomena, we focus exclusively on a subcategory of MWEs: nominal compounds. They represent an ideal case study for this work, thanks to their relatively homogeneous syntax (as opposed to other categories of MWEs such as verbal idioms) and their pervasiveness in language. We assume that models able to predict the compositionality of nominal compounds could be generalized to other MWE categories by addressing their variability in future work. Furthermore, to determine to what extent these approaches are also adequate cross-lingually, we evaluate them in three languages: English, French, and Portuguese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Given that MWEs are frequent in languages (Sag et al. 2002) , identifying idiomaticity and producing accurate semantic representations for compositional and idiomatic cases is of relevance to NLP tasks and applications that involve some form of semantic processing, including semantic parsing (Hwang et al. 2010 ; Jagfeld and van der Plas 2015), word sense disambiguation (Finlayson and Kulkarni 2011; Schneider et al. 2016) , and machine translation (Ren et al. 2009; Carpuat and Diab 2010; Cap et al. 2015; . Moreover, the evaluation of DSMs on tasks involving MWEs, such as compositionality prediction, has the potential to drive their development towards new directions.", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 59, |
| "text": "(Sag et al. 2002)", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 293, |
| "end": 311, |
| "text": "(Hwang et al. 2010", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 372, |
| "end": 401, |
| "text": "(Finlayson and Kulkarni 2011;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 402, |
| "end": 424, |
| "text": "Schneider et al. 2016)", |
| "ref_id": "BIBREF70" |
| }, |
| { |
| "start": 451, |
| "end": 468, |
| "text": "(Ren et al. 2009;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 469, |
| "end": 491, |
| "text": "Carpuat and Diab 2010;", |
| "ref_id": null |
| }, |
| { |
| "start": 492, |
| "end": 508, |
| "text": "Cap et al. 2015;", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The main hypothesis of our work is that, if the meaning of a compositional nominal compound can be derived from a combination of its parts, this translates in DSMs as similar vectors for a compositional nominal compound and for the combination of the vectors of its parts using some vector operation, that we refer to as composition function. Conversely we can use the lack of similarity between the nominal compound vector representation and a combination of its parts to detect idiomaticity. Furthermore, we hypothesize that accuracy in predicting compositionality depends both on the characteristics of the DSMs used to represent expressions and their components and on the composition function adopted. Therefore, we have built 684 DSMs and performed an extensive evaluation, involving over 9,072 analyses, investigating various types of DSMs, their configurations, the corpora used to train them, and the composition function used to build vectors for expressions. 2 This article is structured as follows. Section 2 presents related work on distributional semantics, compositionality prediction, and nominal compounds. Section 3 presents the data sets created for our evaluation. Section 4 describes the compositionality prediction framework, along with the composition functions which we evaluate. Section 5 specifies the experimental setup (corpora, DSMs, parameters, and evaluation measures). Section 6 presents the overall results of the evaluated models. Sections 7 and 8 evaluate the impact of DSM and corpus parameters, and of composition functions on compositionality prediction. Section 9 discusses system predictions through an error analysis. Section 10 summarizes our conclusions. Appendix A contains a glossary, Appendix B presents extra sanity-check experiments, Appendix C contains the questionnaire used for data collection, and Appendices D, E, and F list the compounds in the data sets.", |
| "cite_spans": [ |
| { |
| "start": 970, |
| "end": 971, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The literature on distributional semantics is extensive (Lin 1998 ; Turney and Pantel 2010; Baroni and Lenci 2010; Mohammad and Hirst 2012) , so we provide only a brief introduction here, underlining their most relevant characteristics to our framework (Section 2.1). Then, we define compositionality prediction and discuss existing approaches, focusing on distributional techniques for multiword expressions (Section 2.2). Our framework is evaluated on nominal compounds, and we discuss their relevant properties (Section 2.3) along with existing data sets for evaluating compositionality prediction (Section 2.4).", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 65, |
| "text": "(Lin 1998", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 92, |
| "end": 114, |
| "text": "Baroni and Lenci 2010;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 115, |
| "end": 139, |
| "text": "Mohammad and Hirst 2012)", |
| "ref_id": "BIBREF50" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "2 This article significantly extends and updates previous publications:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "We consolidate the description of the data sets introduced in and by adding details about data collection, filtering, and results of a thorough analysis studying the correlation between compositionality and related variables.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "We extend the compositionality prediction framework described in by adding and evaluating new composition functions and DSMs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "We extend the evaluation reported in not only by adding Portuguese, but also by evaluating additional parameters: corpus size, composition functions, and new DSMs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Distributional semantic models (DSMs) use context information to represent the meaning of lexical units as vectors. These vectors are built assuming the distributional hypothesis, whose central idea is that the meaning of a word can be learned based on the contexts where it appears-or, as popularized by Firth (1957) , \"you shall know a word by the company it keeps.\" Formally, a DSM attempts to encode the meaning of each target word w i of a vocabulary V as a vector of real numbers v(w i ) in R |V| . Each component of v(w i ) is a function of the co-occurrence between w i and the other words in the vocabulary (its contexts w c ). This function can be simply a co-occurrence count c(w i , w c ), or some measure of the association between w i and each w c , such as pointwise mutual information (PMI, Church and Hanks [1990] , Lin [1999] ) or positive PMI (PPMI, Baroni, Dinu, and Kruszewski [2014] ; Levy, Goldberg, and Dagan [2015] ).", |
| "cite_spans": [ |
| { |
| "start": 305, |
| "end": 317, |
| "text": "Firth (1957)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 801, |
| "end": 830, |
| "text": "(PMI, Church and Hanks [1990]", |
| "ref_id": null |
| }, |
| { |
| "start": 833, |
| "end": 843, |
| "text": "Lin [1999]", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 869, |
| "end": 904, |
| "text": "Baroni, Dinu, and Kruszewski [2014]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 907, |
| "end": 939, |
| "text": "Levy, Goldberg, and Dagan [2015]", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In DSMs, co-occurrence can be defined as two words co-occurring in the same document, sentence, or sentence fragment in a corpus. Intrasentential models are often based on a sliding window; that is, a context word w c co-occurs within a certain window of W words around the target w i . Alternatively, co-occurrence can also be based on syntactic relations obtained from parsed corpora, where a context word w c appears within specific syntactic relations with w i (Lin 1998; Pad\u00f3 and Lapata 2007; Lapesa and Evert 2017) .", |
| "cite_spans": [ |
| { |
| "start": 465, |
| "end": 475, |
| "text": "(Lin 1998;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 476, |
| "end": 497, |
| "text": "Pad\u00f3 and Lapata 2007;", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 498, |
| "end": 520, |
| "text": "Lapesa and Evert 2017)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The set of all vectors v(w i ), \u2200w i \u2208 V can be represented as a sparse co-occurrence matrix V \u00d7 V \u2192 R. Given that most word pairs in this matrix co-occur rarely (if ever), a threshold on the number of co-occurrences is often applied to discard irrelevant pairs. Additionally, co-occurrence vectors can be transformed to have a significantly smaller number of dimensions, converting vectors in R |V| into vectors in R d , with d |V|. 3 Two solutions are commonly employed in the literature. The first one consists in using context thresholds, where all target-context pairs that do not belong to the top-d most relevant pairs are discarded (Salehi, Cook, and Baldwin 2014; Padr\u00f3 et al. 2014b) . The second solution consists in applying a dimensionality reduction technique such as singular value decomposition on the co-occurrence matrix where only the d largest singular values are retained (Deerwester et al. 1990) . Similar techniques focus on the factorization of the logarithm of the co-occurrence matrix (Pennington, Socher, and Manning 2014) and on alternative factorizations of the PPMI matrix (Salle, Villavicencio, and Idiart 2016) .", |
| "cite_spans": [ |
| { |
| "start": 640, |
| "end": 672, |
| "text": "(Salehi, Cook, and Baldwin 2014;", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 673, |
| "end": 692, |
| "text": "Padr\u00f3 et al. 2014b)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 892, |
| "end": 916, |
| "text": "(Deerwester et al. 1990)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1102, |
| "end": 1141, |
| "text": "(Salle, Villavicencio, and Idiart 2016)", |
| "ref_id": "BIBREF68" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Alternatively, DSMs can be constructed by training a neural network to predict target-context relationships. For instance, a network can be trained to predict a target word w i among all possible words in V given as input a window of surrounding context words. This is known as the continuous bag-of-words model. Conversely, the network can try to predict context words for a target word given as input, and this is known as the skip-gram model . In both cases, the network training procedure allows encoding in the hidden layer semantic information about words as a side effect of trying to solve the prediction task. The weight parameters that connect the unity representing w i with the d-dimensional hidden layer are taken as its vector representation v(w i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "There are a number of factors that may influence the ability of a DSM to accurately learn a semantic representation. These include characteristics of the training corpus such as size (Mikolov, Yih, and Zweig 2013) as well as frequency thresholds and filters (Ferret 2013; Padr\u00f3 et al. 2014b) , genre (Lapesa and Evert 2014) , preprocessing Lapata 2003, 2007) , and type of context (window vs. syntactic dependencies) (Agirre et al. 2009; Lapesa and Evert 2017) . Characteristics of the model include the choice of association and similarity measures (Curran and Moens 2002) , dimensionality reduction strategies (Van de Cruys et al. 2012) , and the use of subsampling and negative sampling techniques (Mikolov, Yih, and Zweig 2013) . However, the particular impact of these factors on the quality of the resulting DSM may be heterogeneous and depends on the task and model (Lapesa and Evert 2014) . Because there is no consensus about a single optimal model that works for all tasks, we compare a variety of models (Section 5) to determine which are best suited for our compositionality prediction framework.", |
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 213, |
| "text": "(Mikolov, Yih, and Zweig 2013)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 258, |
| "end": 271, |
| "text": "(Ferret 2013;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 272, |
| "end": 291, |
| "text": "Padr\u00f3 et al. 2014b)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 300, |
| "end": 323, |
| "text": "(Lapesa and Evert 2014)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 340, |
| "end": 358, |
| "text": "Lapata 2003, 2007)", |
| "ref_id": null |
| }, |
| { |
| "start": 417, |
| "end": 437, |
| "text": "(Agirre et al. 2009;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 438, |
| "end": 460, |
| "text": "Lapesa and Evert 2017)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 550, |
| "end": 573, |
| "text": "(Curran and Moens 2002)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 612, |
| "end": 638, |
| "text": "(Van de Cruys et al. 2012)", |
| "ref_id": "BIBREF78" |
| }, |
| { |
| "start": 701, |
| "end": 731, |
| "text": "(Mikolov, Yih, and Zweig 2013)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 873, |
| "end": 896, |
| "text": "(Lapesa and Evert 2014)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distributional Semantic Models", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Before adopting the principle of compositionality to determine the meaning of a larger unit, such as a phrase or multiword expression (MWE), it is important to determine whether it is idiomatic or not. 4 This problem, known as compositionality prediction, can be solved using methods that measure directly the extent to which an expression is constructed from a combination of its parts, or indirectly via language-dependent properties of MWEs linked to idiomaticity like the degree of determiner variability and morphological flexibility (Fazly, Cook, and Stevenson 2009; Tsvetkov and Wintner 2012; K\u00f6per and Schulte im Walde 2016) . In this article, we focus on direct prediction methods in order to evaluate the target languages under similar conditions. Nonetheless, this does not exclude the future integration of information used by indirect prediction methods, as a complement to the methods discussed here.", |
| "cite_spans": [ |
| { |
| "start": 202, |
| "end": 203, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 539, |
| "end": 572, |
| "text": "(Fazly, Cook, and Stevenson 2009;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 573, |
| "end": 599, |
| "text": "Tsvetkov and Wintner 2012;", |
| "ref_id": "BIBREF76" |
| }, |
| { |
| "start": 600, |
| "end": 632, |
| "text": "K\u00f6per and Schulte im Walde 2016)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For direct prediction methods, three ingredients are necessary. First, we need vector representations of single-word meanings, such as those built using DSMs (Section 2.1). Second, we need a mathematical model of how the compositional meaning of a phrase is calculated from the meanings of its parts. Third, we need the compositionality measure itself, which estimates the similarity between the compositionally constructed meaning of a phrase and its observed meaning, derived from corpora. There are a number of alternatives for each of the ingredients, and throughout this article we call a specific choice of the three ingredients a compositionality prediction configuration.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Regarding the second ingredient, that is, the mathematical model of compositional meaning, the most natural choice is the additive model (Mitchell and Lapata 2008) . In the additive model, the compositional meaning of a phrase w 1 w 2 . . . w n is calculated as a linear combination of the word vectors of its components:", |
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 163, |
| "text": "(Mitchell and Lapata 2008)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "i \u03b2 i v(w i ), where v(w i ) is a d-dimensional vector", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "for each word w i , and the \u03b2 i coefficients assign different weights to the representation of each word (Reddy, McCarthy, and Manandhar 2011; Schulte im Walde, M\u00fcller, and Roller 2013; . These weights can capture the asymmetric contribution of each of the components to the semantics of the whole phrase (Bannard, Baldwin, and Lascarides 2003; Reddy, McCarthy, and Manandhar 2011) . For example, in flea market, it is the head (market) that has a clear contribution to the overall meaning, whereas in couch potato it is the modifier (couch).", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 142, |
| "text": "(Reddy, McCarthy, and Manandhar 2011;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 143, |
| "end": 185, |
| "text": "Schulte im Walde, M\u00fcller, and Roller 2013;", |
| "ref_id": "BIBREF73" |
| }, |
| { |
| "start": 305, |
| "end": 344, |
| "text": "(Bannard, Baldwin, and Lascarides 2003;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 345, |
| "end": 381, |
| "text": "Reddy, McCarthy, and Manandhar 2011)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The additive model can be generalized to use a matrix of multiplicative coefficients, which can be estimated through linear regression (Guevara 2011). This model can be further modified to learn polynomial projections of higher degree, with quadratic projections yielding particularly promising results (Yazdani, Farahmand, and Henderson 2015) . These models come with the caveat of being supervised, requiring some amount of pre-annotated data in the target language. Because of these requirements, our study focuses on unsupervised compositionality prediction methods only, based exclusively on automatically POS-tagged and lemmatized monolingual corpora.", |
| "cite_spans": [ |
| { |
| "start": 303, |
| "end": 343, |
| "text": "(Yazdani, Farahmand, and Henderson 2015)", |
| "ref_id": "BIBREF79" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Alternatives to the additive model include the multiplicative model and its variants (Mitchell and Lapata 2008) . However, results suggest that this representation is inferior to the one obtained through the additive model (Reddy, McCarthy, and Manandhar 2011; . Recent work on predicting intracompound semantics also supports that additive models tend to yield better results than multiplicative models (Hartung et al. 2017) .", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 111, |
| "text": "(Mitchell and Lapata 2008)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 223, |
| "end": 260, |
| "text": "(Reddy, McCarthy, and Manandhar 2011;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 404, |
| "end": 425, |
| "text": "(Hartung et al. 2017)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The third ingredient is the measure of similarity between the compositionally constructed vector and its actual corpus-based representation. Cosine similarity is the most commonly used measure for compositionality prediction in the literature (Schone and Jurafsky 2001; Reddy, McCarthy, and Manandhar 2011; Schulte im Walde, M\u00fcller, and Roller 2013; . Alternatively, one can calculate the overlap between the distributional neighbors of the whole phrase and those of the component words (McCarthy, Keller, and Carroll 2003) , or the number of single-word distributional neighbors of the whole phrase (Riedl and Biemann 2015) .", |
| "cite_spans": [ |
| { |
| "start": 243, |
| "end": 269, |
| "text": "(Schone and Jurafsky 2001;", |
| "ref_id": "BIBREF71" |
| }, |
| { |
| "start": 270, |
| "end": 306, |
| "text": "Reddy, McCarthy, and Manandhar 2011;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 307, |
| "end": 349, |
| "text": "Schulte im Walde, M\u00fcller, and Roller 2013;", |
| "ref_id": "BIBREF73" |
| }, |
| { |
| "start": 487, |
| "end": 523, |
| "text": "(McCarthy, Keller, and Carroll 2003)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 600, |
| "end": 624, |
| "text": "(Riedl and Biemann 2015)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Instead of covering compositionality prediction for MWEs in general, we focus on a particular category of phenomena represented by nominal compounds. We define a nominal compound as a syntactically well-formed and conventionalized noun phrase containing two or more content words, whose head is a noun. 5 They are conventionalized (or institutionalized) in the sense that their particular realization is statistically idiosyncratic, and their constituents cannot be replaced by synonyms (Sag et al. 2002; Baldwin and Kim 2010; Farahmand, Smith, and Nivre 2015) . Their semantic interpretation may be straightforwardly compositional, with contributions from both elements (e.g., climate change), partly compositional, with contribution mainly from one of the elements (e.g., grandfather clock), or idiomatic (e.g., cloud nine) (Nakov 2013) .", |
| "cite_spans": [ |
| { |
| "start": 487, |
| "end": 504, |
| "text": "(Sag et al. 2002;", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 505, |
| "end": 526, |
| "text": "Baldwin and Kim 2010;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 527, |
| "end": 560, |
| "text": "Farahmand, Smith, and Nivre 2015)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 826, |
| "end": 838, |
| "text": "(Nakov 2013)", |
| "ref_id": "BIBREF52" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nominal Compounds", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The syntactic realization of nominal compounds varies across languages. In English, they are often expressed as a sequence of two nouns, with the second noun as the syntactic head, modified by the first noun. This is the most frequently annotated POStag pattern in the MWE-annotated DiMSUM English corpus (Schneider et al. 2016) . In French and Portuguese, they often assume the form of adjective-noun or nounadjective pairs, where the adjective modifies the noun. Examples of such constructions include the adjective-noun compound FR petite annonce (lit. small announcement 'classified ad') and the noun-adjective compound PT buraco negro (lit. hole black 'black hole'). 6 Additionally, compounds may also involve prepositions linking the modifier with the head, as in the case of FR cochon d'Inde (lit. pig of India 'guinea pig') and PT dente de leite (lit. tooth of milk 'milk tooth'). Because prepositions are highly polysemous and their representation in DSMs is tricky, we do not include compounds containing prepositions in this article. Hence, we focus on 2-word nominal compounds of the form noun 1 -noun 2 (in English), and noun-adjective and adjective-noun (in the three languages).", |
| "cite_spans": [ |
| { |
| "start": 305, |
| "end": 328, |
| "text": "(Schneider et al. 2016)", |
| "ref_id": "BIBREF70" |
| }, |
| { |
| "start": 672, |
| "end": 673, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nominal Compounds", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Regarding the meaning of nominal compounds, the implicit relation between the components of compositional compounds can be described in terms of free paraphrases involving verbs, such as flu virus as virus that causes/creates flu (Nakov 2008 ), 7 or prepositions, such as olive oil as oil from olives (Lauer 1995) . These implicit relations can often be seen explicitly in the equivalent expressions in other languages (e.g., FR huile d'olive and PT azeite de oliva for EN olive oil).", |
| "cite_spans": [ |
| { |
| "start": 230, |
| "end": 241, |
| "text": "(Nakov 2008", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 301, |
| "end": 313, |
| "text": "(Lauer 1995)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nominal Compounds", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Alternatively, the meaning of compositional nominal compounds can be described using a closed inventory of relations which make the role of the modifier explicit with respect to the head noun, including syntactic tags such as subject and object, and semantic tags such as instrument and location (Girju et al. 2005) . The degree of compositionality of a nominal compound can also be represented using numerical scores (Section 2.4) to indicate to what extent the component words allow predicting the meaning of the whole (Reddy, McCarthy, and Manandhar 2011; Roller, Schulte im Walde, and Scheible 2013; . The latter is the representation that we adopted in this article.", |
| "cite_spans": [ |
| { |
| "start": 296, |
| "end": 315, |
| "text": "(Girju et al. 2005)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 521, |
| "end": 558, |
| "text": "(Reddy, McCarthy, and Manandhar 2011;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 559, |
| "end": 603, |
| "text": "Roller, Schulte im Walde, and Scheible 2013;", |
| "ref_id": "BIBREF63" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nominal Compounds", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The evaluation of compositionality prediction models can be performed extrinsically or intrinsically. In extrinsic evaluation, compositionality information can be used to decide how a compound should be treated in NLP systems such as machine translation or text simplification. For instance, for machine translation, idiomatic compounds need to be treated as atomic phrases, as current methods of morphological compound processing cannot be applied to them (Stymne, Cancedda, and Ahrenberg 2013; Cap et al. 2015) .", |
| "cite_spans": [ |
| { |
| "start": 457, |
| "end": 495, |
| "text": "(Stymne, Cancedda, and Ahrenberg 2013;", |
| "ref_id": "BIBREF75" |
| }, |
| { |
| "start": 496, |
| "end": 512, |
| "text": "Cap et al. 2015)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Numerical Compositionality Data sets", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Although potentially interesting, extrinsic evaluation is not straightforward, as results may be influenced both by the compositionality prediction model and by the strategy for integration of compositionality information into the NLP system. Therefore, most related work focuses on an intrinsic evaluation, where the compositionality scores produced by a model are compared to a gold standard, usually a data set where nominal compound semantics have been annotated manually. Intrinsic evaluation thus requires the existence of data sets where each nominal compound has one (or several) numerical scores associated with it, indicating its compositionality. Annotations can be provided by expert linguist annotators or by crowdsourcing, often requiring that several annotators judge the same compound to reduce the impact of subjectivity on the scores. Relevant compositionality data sets of this type are listed below, some of which were used in our experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Numerical Compositionality Data sets", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "\u2022 Reddy, McCarthy, and Manandhar (2011) collected judgments for a set of 90 English noun-noun (e.g., zebra crossing) and adjective-noun (e.g., sacred cow) compounds, in terms of three numerical scores: the compositionality of the compound as a whole and the literal contribution of each of its parts individually, using a scale from 0 to 5. The data set was built through crowdsourcing, and the final scores are the average of 30 judgments per compound.This data set will be referred to as Reddy in our experiments.", |
| "cite_spans": [ |
| { |
| "start": 2, |
| "end": 39, |
| "text": "Reddy, McCarthy, and Manandhar (2011)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Numerical Compositionality Data sets", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "\u2022 Farahmand, Smith, and Nivre (2015) collected judgments for 1,042 English noun-noun compounds. Each compound has binary judgments regarding non-compositionality and conventionalization given by four expert annotators (both native and non-native speakers). A hard threshold is applied so that compounds are considered as noncompositional if at least two annotators say so (Yazdani, Farahmand, and Henderson 2015) , and the total compositionality score is given by the sum of the four binary judgments. This data set will be referred to as Farahmand in our experiments.", |
| "cite_spans": [ |
| { |
| "start": 372, |
| "end": 412, |
| "text": "(Yazdani, Farahmand, and Henderson 2015)", |
| "ref_id": "BIBREF79" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Numerical Compositionality Data sets", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "\u2022 Kruszewski and Baroni (2014) built the Norwegian Blue Parrot data set, containing judgments for modifier-head phrases in English. The judgments consider whether the phrase is (1) an instance of the concept denoted by the head (e.g., dead parrot and parrot) and (2) a member of the more general concept that includes the head (e.g., dead parrot and pet), along with typicality ratings, with 5,849 judgments in total. Compounds are judged by multiple annotators, and the final compositionality score is the average across annotators. The data set is also annotated for in-corpus frequency, productivity, and ambiguity, and a subset of 180 compounds has been selected for balancing these variables. The annotations were performed by the authors, linguists, and through crowdsourcing. For the balanced subset of 180 compounds, compositionality annotations were performed by experts only, excluding the authors.", |
| "cite_spans": [ |
| { |
| "start": 2, |
| "end": 30, |
| "text": "Kruszewski and Baroni (2014)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Numerical Compositionality Data sets", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "For a multilingual evaluation, in this work, we construct two data sets, one for French and one for Portuguese compounds, and extend the Reddy data set for English using the same protocol as Reddy, McCarthy, and Manandhar (2011) .", |
| "cite_spans": [ |
| { |
| "start": 191, |
| "end": 228, |
| "text": "Reddy, McCarthy, and Manandhar (2011)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Numerical Compositionality Data sets", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "In Section 3.1, we describe the construction of data sets of 180 compounds for French (FR-comp) and Portuguese (PT-comp). For English, the complete data set contains 280 compounds, of which 190 are new and 90 come from the Reddy data set. We use 180 of these (EN-comp) for cross-lingual comparisons (90 from the original Reddy data set combined with 90 new ones from EN-comp 90 ), and 100 new compounds as held-out data (EN-comp Ext ), to evaluate the robustness of the results obtained (Section 6.3). These data sets containing compositionality scores for 2-word nominal compounds are used to evaluate our framework (Section 4), and we discuss their characteristics in Section 3.2. 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Creation of a Multilingual Compositionality Data set", |
| "sec_num": "3." |
| }, |
| { |
| "text": "For each of the target languages, we collected, via crowdsourcing, a set of numerical scores corresponding to the level of compositionality of the target nominal compounds. We asked non-expert participants to judge each compound considering three sentences where the compound occurred. After reading the sentences, participants assess the degree to which the meaning of the compound is related to the meanings of its parts. This follows from the assumption that a fully compositional compound will have an interpretation whose meaning stems from both words (e.g., lime tree as a tree of limes), while a fully idiomatic compound will have a meaning that is unrelated to its components (e.g., nut case as an eccentric person).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Our work follows the protocol proposed by Reddy, McCarthy, and Manandhar (2011) , where compositionality is explained in terms of the literality of the individual parts. This type of indirect annotation does not require expert linguistic knowledge, and still provides reliable data, as we show later. For each language, data collection involved four steps: compound selection, sentence selection, questionnaire design, and data aggregation.", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 79, |
| "text": "Reddy, McCarthy, and Manandhar (2011)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Compound Selection. For each data set, we manually selected nominal compounds from dictionaries, corpus searches, and by linguistic introspection, maintaining an equal proportion of compounds that are compositional, partly compositional, and idiomatic. 9 We considered them to be compositional if their semantics are related to both components (e.g., benign tumor), partly compositional if their semantics are related to only one of the components (e.g., grandfather clock), and idiomatic if they are not directly related to either (e.g., old flame). This preclassification was used only to select a balanced set of compounds and was not shown to the participants nor used at any later stage. For all languages, all compounds are required to have a head that is unambiguously a noun, and additionally for French and Portuguese, all compounds have an adjective as modifier.", |
| "cite_spans": [ |
| { |
| "start": 253, |
| "end": 254, |
| "text": "9", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Sentence Selection. Compounds may be polysemous (e.g., FR bras droit may mean most reliable helper or literally right arm). To avoid any potential sense uncertainty, each compound was presented to the participants with the same sense in three sentences. These sentences were manually selected from the WaC corpora: ukWaC (Baroni et al. 2009) , frWaC, and brWaC (Boos, Prestes, and Villavicencio 2014) , presented in detail in Section 5.", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 341, |
| "text": "(Baroni et al. 2009)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 361, |
| "end": 400, |
| "text": "(Boos, Prestes, and Villavicencio 2014)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Questionnaire Design. For each compound, after reading three sentences, participants are asked to:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 provide synonyms for the compound in these sentences. The synonyms are used as additional validation of the quality of the judgments: if unrelated words are provided, the answers are discarded.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 assess the contribution of the head noun to the meaning of the compound (e.g., is a busy bee always literally a bee?)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 assess the contribution of the modifier noun or adjective to the meaning of the compound (e.g., is a busy bee always literally busy?)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 assess the degree to which the compound can be seen as a combination of its parts (e.g., is a busy bee always literally a bee that is busy?)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Participants answer the last three items using a Likert scale from 0 (idiomatic/nonliteral) to 5 (compositional/literal), following Reddy, McCarthy, and Manandhar (2011) . To qualify for the task, participants had to submit demographic information confirming that they are native speakers, and to undergo training in the form of four example questions with annotated answers in an external form (see Appendix C for details).", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 169, |
| "text": "Reddy, McCarthy, and Manandhar (2011)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Data Aggregation. For English and French, we collected answers using Amazon Mechanical Turk (AMT), manually removing answers that were not from native speakers or where the synonyms provided were unrelated to the target compound sense. Because AMT has few Brazilian Portuguese native speakers, we developed an in-house web interface for the questionnaire, which was sent out to Portuguese-speaking NLP mailing lists. For a given compound and question we calculate aggregated scores as the arithmetic averages of all answers across participants. We will refer to these averaged scores as the human compositionality score (hc)s. We average the answers to the three questions independently, generating three scores: hc H for the head noun, hc M for the modifier, and hc HM for the whole compound. In our framework, we try to predict hc HM automatically (Section 5). To assess the variability of the answers (Section 3.2.1), we also calculate the standard deviation across participants for each question (\u03c3 H , \u03c3 M , and \u03c3 HM ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The list of compounds, their translations, glosses, and compositionality scores are given in Appendices D (EN-comp 90 and EN-comp Ext ), E (FR-comp), and F (PT-comp). 10", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Collection", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In this section, we present different measures of agreement among participants (Section 3.2.1) and examine possible correlations between compositionality scores, familiarity, and conventionalization (Section 3.2.2) in the data sets created for this article.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data set Analysis", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "To assess the quality of the collected human compositionality scores, we use standard deviation and inter-annotator agreement scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Data set Quality.", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "Standard Deviation ( \u03c3 and P \u03c3>1.5 ) . The standard deviation (\u03c3) of the participants' answers can be used as an indication of their agreement: for each compound and for each of the three questions, small \u03c3 values suggest greater agreement. In addition, if the instructions are clear, \u03c3 can also be seen as an indication of the level of difficulty of the task. In other words, all other things being equal, compounds with larger \u03c3 can be considered intrinsically harder to analyze by the participants. For each data set, we consider two aggregated metrics based on \u03c3:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Data set Quality.", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "\u2022 \u03c3 -The average of \u03c3 in the data set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Data set Quality.", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "\u2022 P \u03c3>1.5 -The proportion of compounds whose \u03c3 is higher than 1.5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measuring Data set Quality.", |
| "sec_num": "3.2.1" |
| }, |
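These two aggregated metrics can be sketched as below (a minimal illustration assuming each compound's answers are given as a plain list of Likert ratings; the use of the sample standard deviation is an assumption):

```python
import statistics

def sigma_metrics(answers_per_compound, threshold=1.5):
    # answers_per_compound: one list of Likert answers (0-5) per compound
    sigmas = [statistics.stdev(answers) for answers in answers_per_compound]
    mean_sigma = sum(sigmas) / len(sigmas)                       # average sigma
    p_high = sum(s > threshold for s in sigmas) / len(sigmas)    # P_{sigma>1.5}
    return mean_sigma, p_high

# toy data: one well-agreed compound, one contentious one
mean_sigma, p_high = sigma_metrics([[4, 4, 5, 4], [0, 5, 1, 4]])
```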
| { |
| "text": "Average number of answers per compound n, average standard deviation \u03c3, proportion of high standard deviation P \u03c3>1.5 , for the compound (HM), head (H), and modifier (M). Table 1 presents the result of these metrics when applied to our in-house data sets, as well as to the original Reddy data set. The column n indicates the average number of answers per compound, while the other six columns present the values of \u03c3 and P \u03c3>1.5 for compound (HM), head-only (H), and modifier-only (M) scores.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 171, |
| "end": 178, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 1", |
| "sec_num": null |
| }, |
| { |
| "text": "n \u03c3 HM \u03c3 H \u03c3 M P \u03c3 HM >1.5 P \u03c3 H >1.5 P \u03c3 M >1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data set", |
| "sec_num": null |
| }, |
| { |
| "text": "These values are below what would be expected for random decisions (\u03c3 rand 1.71, for the Likert scale). Although our data sets exhibit higher variability than Reddy, this may be partly due to the application of filters done by Reddy, McCarthy, and Manandhar (2011) to remove outliers. 11 These values could also be due to the collection of fewer answers per compound for some of the data sets. However, there is no clear tendency in the variation of the standard deviation of the answers and the number of participants n. The values of \u03c3 are quite homogeneous, ranging from 1.05 for EN-comp 90 (head) to 1.27 for EN-comp Ext (head). The low agreement for modifiers may be related to a greater variability in semantic relations between modifiers and compounds: these include material (e.g., brass ring), attribute (e.g., black cherry), and time (e.g., night owl).", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 264, |
| "text": "Reddy, McCarthy, and Manandhar (2011)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data set", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 1(a) shows standard deviation (\u03c3 HM , \u03c3 H , and \u03c3 M ) for each compound of FR-comp as a function of its average compound score hc HM . 12 For all three languages, greater agreement was found for compounds at the extremes of the compositionality scale (fully compositional or fully idiomatic) for all scores. These findings can be partly explained by end-of-scale effects, that result in greater variability for the intermediate scores in the Likert scale (from 1 to 4) that correspond to the partly compositional cases. Hence, we expect that it will be easier to predict the compositionality of idiomatic/compositional compounds than of partly compositional ones.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data set", |
| "sec_num": null |
| }, |
| { |
| "text": "Inter-Annotator Agreement (\u03b1). To measure inter-annotator agreement of multiple participants, taking into account the distance between the ordinal ratings of the Likert scale, we adopt the \u03b1 score (Artstein and Poesio 2008) . The \u03b1 score is more appropriate for ordinal data than traditional agreement scores for categorical data, such as Cohen's and Fleiss' \u03ba (Cohen 1960; Fleiss and Cohen 1973) . However, due to the use of crowdsourcing, most participants rated only a small number of compounds with very limited chance of overlap among them: the average number of answers per participant is 13.6 for EN-comp 90 , 10.2 for EN-comp Ext , 33.7 for FR-comp, and 53.5 for PT-comp. \u03b1 score assumes that each participant rates all the items, we focus on the answers provided by three of the participants, who rated the whole set of 180 compounds in PT-comp.", |
| "cite_spans": [ |
| { |
| "start": 197, |
| "end": 223, |
| "text": "(Artstein and Poesio 2008)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 361, |
| "end": 373, |
| "text": "(Cohen 1960;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 374, |
| "end": 396, |
| "text": "Fleiss and Cohen 1973)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data set", |
| "sec_num": null |
| }, |
| { |
| "text": "Using a linear distance schema between the answers, 13 we obtain an agreement of \u03b1 = .58 for head-only, \u03b1 = .44 for modifier-only, and \u03b1 = .44 for the whole compound. To further assess the difficulty of this task, we also calculate \u03b1 for a single expert annotator, judging the same set of compounds after an interval of one month. The scores were \u03b1 = .69 for the head and \u03b1 = .59 for both the compound and for the modifier. The Spearman correlation between these two annotations performed by the same expert is \u03c1 = 0.77 for hc HM . This can be seen as a qualitative upper bound for automatic compositionality prediction on PT-comp.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data set", |
| "sec_num": null |
| }, |
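The \u03b1 computation can be sketched as below for the fully crossed case (every annotator rates every item, as with the three PT-comp participants). This is a simplified version assuming an equal number of ratings per item, using the linear distance |a \u2212 b| as the disagreement function:

```python
from itertools import combinations

def alpha(ratings, delta=lambda a, b: abs(a - b)):
    # ratings: one list per item, each holding all answers for that item
    # observed disagreement: mean pairwise distance within items
    within = [delta(a, b) for item in ratings for a, b in combinations(item, 2)]
    d_o = sum(within) / len(within)
    # expected disagreement: mean pairwise distance over all pooled answers
    pooled = [a for item in ratings for a in item]
    between = [delta(a, b) for a, b in combinations(pooled, 2)]
    d_e = sum(between) / len(between)
    return 1 - d_o / d_e
```

With perfect agreement d_o = 0 and \u03b1 = 1; chance-level agreement yields \u03b1 near 0, and systematic disagreement yields negative values.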
| { |
| "text": "3.2.2 Compositionality, Familiarity, and Conventionalization. Figure 1 (b) shows the average scores (hc HM , hc H , and hc M ) for the compounds ranked according to the average compound score hc HM . Although this figure is for FR-comp, similar patterns were found for the other data sets. For all three languages, the human compositionality scores provide additional confirmation that the data sets are balanced, with the compound scores (hc HM ) being distributed linearly along the scale. Furthermore, we have calculated the average hc HM values separately for the compounds in each of the three compositionality classes used for compound selection: idiomatic, partly compositional and compositional (Section 3.1). These averages are, respectively, 1.0, 2.4, and 4.0 for EN-comp 90 ; 1.1, 2.4, and 4.2 for EN-comp Ext ; 1.3, 2.7, and 4.3 for FR-comp; and 1.3, 2.5, and 3.9 for PT-comp, indicating that our attempt to select a balanced number of compounds from each class is visible in the collected hc HM scores.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 62, |
| "end": 70, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data set", |
| "sec_num": null |
| }, |
| { |
| "text": "Additionally, the human scores also suggest an asymmetric impact of the non-literal parts over the compound: whenever participants judged an element of the compound as non-literal, the whole compound was also rated as idiomatic. Thus, most head and modifier scores (hc H and hc M ) are close to or above the diagonal line in Figure 1(b) . In other words, a component of the compound is seldom rated as less literal than the compositionality of the whole compound hc HM , although the opposite is more common. Relation between hc H \u2297 hc M and hc HM in FR-comp, using arithmetic and geometric means.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 325, |
| "end": 336, |
| "text": "Figure 1(b)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data set", |
| "sec_num": null |
| }, |
| { |
| "text": "Spearman \u03c1 correlation between compositionality, frequency, and PMI for the three data sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Data set frequency PMI", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 2", |
| "sec_num": null |
| }, |
| { |
| "text": "FR-comp 0.598 (p < 10 \u221218 ) 0.164 (p > 0.01) PT-comp 0.109 (p > 0.1) 0.076 (p > 0.1) EN-comp 90 0.305 (p < 10 \u22122 ) \u22120.024 (p > 0.1) EN-comp Ext 0.384 (p < 10 \u22125 ) 0.138 (p > 0.1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 2", |
| "sec_num": null |
| }, |
| { |
| "text": "To evaluate if it is possible to predict hc HM from the hc H and hc M , we calculate the arithmetic and geometric means between hc H and hc M for each compound. Figure 2 shows the linear regression of both measures for FR-comp. The goodness of fit is r 2 arith = .93 for the arithmetic mean, and r 2 geom = .96 for the geometric mean, confirming that they are good predictors of hc HM . 14 Thus, we assume that hc HM summarizes hc H and hc M , and focus on predicting hc HM instead of hc H and hc M separately. These findings also inspired the pc arith and pc geom compositionality prediction functions (Section 4).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 161, |
| "end": 169, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 2", |
| "sec_num": null |
| }, |
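The two predictors can be sketched as follows (toy scores; the geometric mean is pulled toward the lower of the two component scores, matching the annotator behavior described above):

```python
import math

def hc_arith(hc_h, hc_m):
    # arithmetic mean of head and modifier literality scores
    return (hc_h + hc_m) / 2

def hc_geom(hc_h, hc_m):
    # geometric mean: closer to the lower of the two scores
    return math.sqrt(hc_h * hc_m)

# e.g., a literal head (4.0) combined with a non-literal modifier (1.0)
arith = hc_arith(4.0, 1.0)  # 2.5
geom = hc_geom(4.0, 1.0)    # 2.0, dragged toward the low modifier score
```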
| { |
| "text": "To examine whether there is an effect of the familiarity of a compound on hc scores, in particular if more idiomatic compounds need to be more familiar, we also calculated the correlation between the compositionality score for a compound hc HM and its frequency in a corpus, as a proxy for familiarity. In this case we used the WaC corpora and calculated the frequencies based on the lemmas. The results, in Table 2 , show a statistically significant positive Spearman correlation of \u03c1 = 0.305 for EN-comp 90 , \u03c1 = 0.384 for EN-comp Ext , and \u03c1 = 0.598 for FR-comp, indicating that, contrary to our expectations, compounds that are more frequent tend to be assigned higher compositionality scores. However, frequency alone is not enough to predict compositionality, and further investigation is needed to determine if compositionality and frequency are also correlated with other factors.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 408, |
| "end": 415, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 2", |
| "sec_num": null |
| }, |
| { |
| "text": "We also analyzed the correlation between compositionality and conventionalization to determine if more idiomatic compounds correspond to more conventionalized ones. We use PMI (Church and Hanks 1990) as a measure of conventionalization, as it indicates the strength of association between the components (Farahmand, Smith, and Nivre 2015). We found no statistically significant correlation between compositionality and PMI.", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 199, |
| "text": "(Church and Hanks 1990)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 2", |
| "sec_num": null |
| }, |
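PMI can be computed from corpus counts as sketched below (toy counts; base-2 logarithm, as is conventional for this measure):

```python
import math

def pmi(count_pair, count_w1, count_w2, n):
    # PMI(w1, w2) = log2( P(w1 w2) / (P(w1) * P(w2)) )  (Church and Hanks 1990)
    p_pair = count_pair / n
    p_w1 = count_w1 / n
    p_w2 = count_w2 / n
    return math.log2(p_pair / (p_w1 * p_w2))

# toy counts: the pair occurs far more often than chance would predict,
# indicating a conventionalized combination
score = pmi(count_pair=10, count_w1=100, count_w2=100, n=10_000)
```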
| { |
| "text": "We propose a compositionality prediction framework 15 including the following elements: a DSM, created from corpora using existing state-of-the-art models that generate corpus-derived vectors 16 for compounds w 1 w 2 and for their components w 1 and w 2 ; a composition function; and a set of predicted compositionality scores (pc). The framework, shown in Figure 3 , is evaluated by measuring the correlation between the scores predicted by the models (pc) and the human compositionality scores (hc) for the list of compounds in our data sets (Section 3). The predicted compositionality scores are obtained from the cosine similarity between the corpus-derived vector of the compound, v(w 1 w 2 ), and the compositionally constructed vector, v \u03b2 (w 1 , w 2 ):", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 357, |
| "end": 365, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": "pc \u03b2 (w 1 w 2 ) = cos( v(w 1 w 2 ), v \u03b2 (w 1 , w 2 ) ). For v \u03b2 (w 1 , w 2 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": ", we use the additive model (Mitchell and Lapata 2008) , in which the composition function is a weighted linear combination:", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 54, |
| "text": "(Mitchell and Lapata 2008)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": "v \u03b2 (w 1 w 2 ) = \u03b2 v(w head ) ||v(w head )|| + (1 \u2212 \u03b2) v(w mod ) ||v(w mod )|| ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": "where w head (or w mod ) indicates the head (or modifier) of the compound w 1 w 2 , || \u2022 || is the Euclidean norm, and \u03b2 \u2208 [0, 1] is a parameter that controls the relative importance of the head to the compound's compositionally constructed vector. The normalization of both vectors allows taking only their directions into account, regardless of their norms, which are usually proportional to their frequency and irrelevant to meaning. We define six compositionality scores based on pc \u03b2 . Three of them pc head (w 1 w 2 ), pc mod (w 1 w 2 ), and pc uniform (w 1 w 2 ), correspond to different assumptions about how we model compositionality: if dependent on the head (\u03b2 = 1, for e.g., crocodile tears), on the modifier (\u03b2 = 0, for e.g., busy bee), or in equal measure on the head and modifier (\u03b2 = 1/2, for e.g., graduate student). The fourth score is based on the assumption that compositionality may be distributed differently between head and modifier for different compounds. We implement this idea by setting individually for each compound the questionnaires questionnaires value for \u03b2 that yields maximal similarity in the predicted compositionality score, that is: 17", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "[Figure residue: framework diagram labels: compound vocabulary (list of compounds), DSM configuration, DSM parameters, processed corpus, corpus-derived vectors v(w_1), v(w_2), v(w_1 w_2), v_\u03b2(w_1 w_2)]", |
| "eq_num": "" |
| } |
| ], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": "pc_maxsim(w_1 w_2) = max_{0 \u2264 \u03b2 \u2264 1} pc_\u03b2(w_1 w_2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Two other scores are not based on the additive model and do not require a composition function. Instead, they are based on the intuitive notion that compositionality is related to the average similarity between the compound and its components:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": "pc_avg(w_1 w_2) = avg(pc_head(w_1 w_2), pc_mod(w_1 w_2))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
| { |
| "text": "We test two possibilities: the arithmetic mean pc_arith(w_1 w_2) considers that compositionality is linearly related to the similarity of each component of the compound, whereas the geometric mean pc_geom(w_1 w_2) reflects the tendency found in human annotations to assign compound scores hc_HM closer to the lower of the scores for the head hc_H and for the modifier hc_M (Section 3.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compositionality Prediction Framework", |
| "sec_num": "4." |
| }, |
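The score family defined above is easy to state in code. The sketch below is an illustrative NumPy implementation (the function and variable names are ours, not from the article's released software): pc_beta composes the normalized head and modifier vectors, pc_maxsim approximates the maximization over \u03b2 with a grid search, and the arithmetic and geometric means average the head-only and modifier-only similarities.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pc_beta(v_compound, v_head, v_mod, beta):
    # Similarity between the corpus-derived compound vector and the
    # compositionally constructed vector of the additive model.
    composed = (beta * v_head / np.linalg.norm(v_head)
                + (1 - beta) * v_mod / np.linalg.norm(v_mod))
    return cosine(v_compound, composed)

def pc_scores(v_compound, v_head, v_mod):
    """Return the six compositionality scores for one compound."""
    head = pc_beta(v_compound, v_head, v_mod, 1.0)      # pc_head (beta = 1)
    mod = pc_beta(v_compound, v_head, v_mod, 0.0)       # pc_mod (beta = 0)
    uniform = pc_beta(v_compound, v_head, v_mod, 0.5)   # pc_uniform (beta = 1/2)
    # pc_maxsim: grid search approximating the maximum over beta in [0, 1].
    maxsim = max(pc_beta(v_compound, v_head, v_mod, b)
                 for b in np.linspace(0.0, 1.0, 101))
    arith = (head + mod) / 2                            # pc_arith: arithmetic mean
    geom = np.sqrt(max(head, 0.0) * max(mod, 0.0))      # pc_geom (clamped at 0)
    return {"head": head, "mod": mod, "uniform": uniform,
            "maxsim": maxsim, "arith": arith, "geom": geom}
```

A grid over \u03b2 suffices here because the cosine varies smoothly with \u03b2; the clamp in pc_geom avoids taking the square root of a negative product when a cosine is negative.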
| { |
| "text": "This section describes the common setup used for evaluating compositionality prediction, such as corpora (Section 5.1), DSMs (Section 5.2), and evaluation metrics (Section 5.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5." |
| }, |
| { |
| "text": "In this work we used the lemmatized and POS-tagged versions of the WaC corpora not only for building DSMs, but also as sources of information about the target compounds for the analyses performed (e.g., in Sections 3.2.2, 9.1, and 9.2):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 for English, the ukWaC (Baroni et al. 2009) , with 2.25 billion tokens, parsed with MaltParser (Nivre, Hall, and Nilsson 2006) ;", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 45, |
| "text": "(Baroni et al. 2009)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 97, |
| "end": 128, |
| "text": "(Nivre, Hall, and Nilsson 2006)", |
| "ref_id": "BIBREF53" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 for French, the frWaC with 1.61 billion tokens preprocessed with TreeTagger (Schmid 1995) ; and", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 91, |
| "text": "(Schmid 1995)", |
| "ref_id": "BIBREF69" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 for Brazilian Portuguese, a combination of brWaC (Boos, Prestes, and Villavicencio 2014) , Corpus Brasileiro, 18 and all Wikipedia entries, 19 with a total of 1.91 billion tokens, all parsed with PALAVRAS (Bick 2000) .", |
| "cite_spans": [ |
| { |
| "start": 51, |
| "end": 90, |
| "text": "(Boos, Prestes, and Villavicencio 2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 207, |
| "end": 218, |
| "text": "(Bick 2000)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For all compounds contained in our data sets, we transformed their occurrences into single tokens by joining their component words with an underscore (e.g., EN monkey business \u2192 monkey_business and FR belle-m\u00e8re \u2192 belle_m\u00e8re). 20, 21 To handle POS-tagging and lemmatization irregularities, we retagged the compounds' components using the gold POS and lemma in our data sets (e.g., for EN sitting duck, sit/verb duck/noun \u2192 sitting/adjective duck/noun). We also simplified all POS tags using coarse-grained labels (e.g., verb instead of vvz). All forms are then lowercased (surface forms, lemmas, and POS tags), and noisy tokens with special characters, numbers, or punctuation are removed. Additionally, ligatures are normalized for French (e.g., \u0153 \u2192 oe) and a spellchecker 22 is applied to normalize words across English spelling variants (e.g., color \u2192 colour).", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 230, |
| "text": "20,", |
| "ref_id": null |
| }, |
| { |
| "start": 231, |
| "end": 233, |
| "text": "21", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
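As a toy illustration of the single-token transformation and noise filtering described above (COMPOUNDS, the whitespace tokenizer, and the filtering regex are simplified stand-ins for the actual pipeline, which works on lemmatized, POS-tagged text):

```python
import re

# Hypothetical compound list; the real one comes from the evaluation data sets.
COMPOUNDS = {("monkey", "business"), ("sitting", "duck")}

def join_compounds(tokens):
    """Replace each occurrence of a listed compound by a single
    underscore-joined, lowercased token (monkey business -> monkey_business)."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i].lower(), tokens[i + 1].lower()) in COMPOUNDS:
            out.append(tokens[i].lower() + "_" + tokens[i + 1].lower())
            i += 2
        else:
            out.append(tokens[i].lower())
            i += 1
    return out

def drop_noisy(tokens):
    # Remove tokens containing digits, punctuation, or other special characters.
    return [t for t in tokens if re.fullmatch(r"[a-z_]+", t)]

print(drop_noisy(join_compounds("A sitting duck is no monkey business !".split())))
```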
| { |
| "text": "To evaluate the influence of preprocessing on compositionality prediction (Section 7.3), we generated four versions of each corpus, with different levels of linguistic information. We expect lemmatization to reduce data sparseness by merging morphologically inflected variants of the same lemma:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "surface + : the original raw corpus with no preprocessing, containing surface forms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "surface: stopword removal, generating a corpus of surface forms of content words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "3. lemma PoS : stopword removal, lemmatization, 23 and POS-tagging; generating a corpus of content words distinguished by POS tags, represented as lemma/POS-tag.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "lemma: stopword removal and lemmatization; generating a corpus containing only lemmas of content words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we describe the state-of-the-art DSMs used for compositionality prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DSMs", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Positive Pointwise Mutual Information (PPMI). In the models based on the PPMI matrix, the representation of a target word is a vector containing the PPMI association scores between the target and its contexts (Bullinaria and Levy 2012) . The contexts are nouns and verbs, selected in a symmetric sliding window of W words to the left/right and weighted linearly according to their distance D to the target (Levy, Goldberg, and Dagan 2015) . 24 We consider three models that differ in how the contexts are selected:", |
| "cite_spans": [ |
| { |
| "start": 209, |
| "end": 235, |
| "text": "(Bullinaria and Levy 2012)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 406, |
| "end": 438, |
| "text": "(Levy, Goldberg, and Dagan 2015)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 441, |
| "end": 443, |
| "text": "24", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DSMs", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u2022 In PPMI-thresh, the vectors are |V|-dimensional but only the top d contexts with highest PPMI scores for each target word are kept, while the others are set to zero (Padr\u00f3 et al. 2014a ). 25", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 186, |
| "text": "(Padr\u00f3 et al. 2014a", |
| "ref_id": "BIBREF56" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DSMs", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In PPMI-TopK, the vectors are d-dimensional, and each of the d dimensions corresponds to a context word taken from a fixed list of k contexts, identical for all target words. We chose k as the 1,000 most frequent words in the corpus after removing the top 50 most frequent words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022", |
| "sec_num": null |
| }, |
| { |
| "text": "In PPMI-SVD, singular value decomposition is used to factorize the PPMI matrix and reduce its dimensionality from |V| to d. 26 We set the value of the context distribution smoothing factor to 0.75, and the negative sampling factor to 5 (Levy, Goldberg, and Dagan 2015) . We use the default minimum word count threshold of 5.", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 126, |
| "text": "26", |
| "ref_id": null |
| }, |
| { |
| "start": 236, |
| "end": 268, |
| "text": "(Levy, Goldberg, and Dagan 2015)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022", |
| "sec_num": null |
| }, |
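For concreteness, the PPMI association scores shared by these three models can be sketched as follows. This is a minimal dense-matrix version with our own helper names; the actual models operate on large sparse matrices. The second function illustrates the PPMI-thresh truncation that keeps only the top-d contexts per target word.

```python
import numpy as np

def ppmi(counts):
    """Positive PMI scores from a (targets x contexts) co-occurrence matrix."""
    total = counts.sum()
    p_target = counts.sum(axis=1, keepdims=True) / total   # row marginals
    p_context = counts.sum(axis=0, keepdims=True) / total  # column marginals
    joint = counts / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(joint / (p_target * p_context))
    pmi[~np.isfinite(pmi)] = 0.0       # unseen pairs get no association
    return np.maximum(pmi, 0.0)        # "positive": negative PMI clipped to 0

def ppmi_thresh(matrix, d):
    """PPMI-thresh truncation: keep the top-d contexts per target, zero the rest."""
    out = np.zeros_like(matrix)
    for i, row in enumerate(matrix):
        top = np.argsort(row)[-d:]
        out[i, top] = row[top]
    return out
```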
| { |
| "text": "Word2vec (w2v). Word2vec 27 relies on a neural network to predict target/context pairs. We use its two variants: continuous bag-of-words (w2v-cbow) and skip-gram (w2v-sg). We adopt the default configurations recommended in the documentation, except for: no hierarchical softmax, 25 negative samples, a frequent-word down-sampling rate of 10^-6, 15 training iterations, and a minimum word count threshold of 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022", |
| "sec_num": null |
| }, |
| { |
| "text": "Global Vectors (glove). GloVe 28 implements a factorization of the logarithm of the positional co-occurrence count matrix (Pennington, Socher, and Manning 2014). We adopt the default configurations from the documentation, except for: internal cutoff parameter x_max = 75 and processing of the corpus in 15 iterations. For the corpus versions lemma and lemma PoS (Section 5.1), we use a minimum word count threshold of 5. For surface and surface + , due to the larger vocabulary sizes, we use thresholds of 15 and 20. 29 24 In previous work, adjectives and adverbs were also included as contexts, but the results obtained with only verbs and nouns were better (Padr\u00f3 et al. 2014a). 25 Vectors still have |V| dimensions, but we use d as a shortcut to represent the fact that we only retain the most relevant target-context pairs for each target word. 26 https://bitbucket.org/omerlevy/hyperwords 27 https://code.google.com/archive/p/word2vec/ 28 https://nlp.stanford.edu/projects/glove/ 29 Thresholds were selected so as to not use more than 128 GB of RAM.", |
| "cite_spans": [ |
| { |
| "start": 520, |
| "end": 522, |
| "text": "29", |
| "ref_id": null |
| }, |
| { |
| "start": 661, |
| "end": 681, |
| "text": "(Padr\u00f3 et al. 2014a)", |
| "ref_id": "BIBREF56" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022", |
| "sec_num": null |
| }, |
| { |
| "text": "Summary of DSMs, their parameters, and evaluated parameter values. The combination of these DSMs and their parameter values leads to 228 DSM configurations evaluated per language (1 \u00d7 1 \u00d7 4 \u00d7 3 = 12 for PPMI-TopK, plus 6 \u00d7 3 \u00d7 4 \u00d7 3 = 216 for the other models). Lexical Vectors (lexvec). The LexVec model 30 factorizes the PPMI matrix in a way that penalizes errors on frequent words (Salle, Villavicencio, and Idiart 2016). We adopt the default configurations in the documentation, except for: 25 negative samples, a subsampling rate of 10^-6, and processing of the corpus in 15 iterations. Due to the vocabulary sizes, we use a word count threshold of 10 for lemma and lemma PoS , and 100 for surface and surface + . 31", |
| "cite_spans": [ |
| { |
| "start": 384, |
| "end": 423, |
| "text": "(Salle, Villavicencio, and Idiart 2016)", |
| "ref_id": "BIBREF68" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 3", |
| "sec_num": null |
| }, |
| { |
| "text": "5.2.1 DSM Parameters. In addition to model-specific parameters, the DSMs described above have some shared DSM parameters. We construct multiple DSM configurations by varying the values of these parameters. These combinations produce a total of 228 DSMs per language (see Table 3 ). In particular, we evaluate the influence of the following parameters on compositionality prediction:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 271, |
| "end": 278, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "DSM", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 WINDOWSIZE: Number of context words to the left/right of the target word when searching for target-context co-occurrence pairs. The assumption is that larger windows are better for capturing semantic relations (Jurafsky and Martin 2009) and may be more suitable for compositionality prediction. We use window sizes of 1+1, 4+4, and 8+8. 32", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 238, |
| "text": "(Jurafsky and Martin 2009)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DSM", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 DIMENSION: Number of dimensions of each vector. The underlying hypothesis is that, the higher the number of dimensions, the more accurate the representation of the context is going to be. We evaluate our framework with vectors of 250, 500, and 750 dimensions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DSM", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 WORDFORM: One of the four word-form and stopword removal variants used to represent a corpus, in Section 5.1: surface + , surface, lemma, and lemma PoS . They represent different levels of specificity in the informational content of the tokens, and may have a language-dependent impact on the performance of compositionality prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DSM", |
| "sec_num": null |
| }, |
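The WINDOWSIZE parameter governs which target-context pairs are collected in the first place. A minimal sketch of symmetric-window pair extraction, leaving out the distance weighting and context-POS filtering some of the models apply:

```python
def cooccurrence_pairs(tokens, window=4):
    """Collect (target, context) pairs within a symmetric window of
    `window` words to the left and right of each target token."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a token is not its own context
                pairs.append((target, tokens[j]))
    return pairs
```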
| { |
| "text": "To evaluate a compositionality prediction configuration, we calculate Spearman's \u03c1 rank correlation between the predicted compositionality scores (pc) and the human compositionality scores (hc) for the compounds that appear in the evaluation data set. We mostly use rank correlation instead of linear (Pearson) correlation because we are interested in the framework's ability to order compounds from least to most compositional, regardless of the actual predicted values. For English, besides the evaluation data sets presented in Section 3, we also use Reddy and Farahmand (see Section 2.4) to enable comparison with related work. For Farahmand, since it contains binary judgments 33 instead of graded compositionality scores, results are reported using the best F1 (BF1) score, which is the highest F1 score obtained when the top n compounds are classified as noncompositional, as n is varied (Yazdani, Farahmand, and Henderson 2015). For Reddy, we sometimes present Pearson scores to enable comparison with related work.", |
| "cite_spans": [ |
| { |
| "start": 901, |
| "end": 941, |
| "text": "(Yazdani, Farahmand, and Henderson 2015)", |
| "ref_id": "BIBREF79" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "5.3" |
| }, |
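Both evaluation metrics are straightforward to implement. The sketch below uses our own helper names (in practice a statistics library would be used): Spearman's \u03c1 is computed as the Pearson correlation of ranks, and best F1 sweeps the cutoff n over compounds ranked from least to most compositional, assuming gold labels in which 1 marks a noncompositional compound.

```python
import numpy as np

def rankdata(x):
    # Average ranks (ties share their mean rank), 1-based.
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    sx = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(pc, hc):
    """Spearman's rho: Pearson correlation of the ranks."""
    return float(np.corrcoef(rankdata(pc), rankdata(hc))[0, 1])

def best_f1(scores, gold):
    """Best F1 over all cutoffs n, taking the n lowest-scored
    (least compositional) compounds as predicted noncompositional."""
    order = np.argsort(scores)               # least compositional first
    gold = np.asarray(gold)[order]
    total_pos = gold.sum()
    if total_pos == 0:
        return 0.0
    best, tp = 0.0, 0
    for n in range(1, len(gold) + 1):
        tp += gold[n - 1]
        prec, rec = tp / n, tp / total_pos
        if prec + rec > 0:
            best = max(best, 2 * prec * rec / (prec + rec))
    return best
```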
| { |
| "text": "Because of the large number of compositionality prediction configurations evaluated, we only report the best performance for each configuration over all possible DSM parameter values. The generalization of these analyses is then ensured using cross-validation and held-out data. To determine whether the difference between two prediction results is statistically significant, we use Wilcoxon's nonparametric sign-rank test.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In this section, we present the overall results obtained on the Reddy, Farahmand, EN-comp, FR-comp, and PT-comp data sets, comparing all possible configurations (Section 6.1). To determine their robustness, we also report evaluation for all languages using cross-validation (Section 6.2) and for English using the held-out data set EN-comp Ext (Section 6.3). All results reported in this section use the pc_uniform function. Table 4 shows the highest overall values obtained for each DSM (columns) on each data set (rows). For English (Reddy, EN-comp, and Farahmand), the highest results for the compounds found in the corpus were obtained with w2v and PPMI-thresh, shown as the first value in each pair in Table 4. Not all compounds in the English data sets are present in our corpus. Therefore, we also report results adopting a fallback strategy (the second value). Because its impact depends on the data set, and the relative performance of the models is similar with or without it, for the remainder of the article we discuss only the results without fallback. 34 In a direct comparison, the best w2v-cbow and w2v-sg configurations are not significantly different from each other, but both are different from PPMI-thresh (p < 0.05). [Table 4 caption: Highest results for each DSM, using BF1 for the Farahmand data set, Pearson r for Reddy (r), and Spearman \u03c1 for all the other data sets. For English, in each pair of values, the first is for the compounds found in the corpus, and the second uses fallback for missing compounds.] In short, these results suggest language-dependent trends for DSMs, by which w2v models perform better for the English data sets, and PPMI-thresh for French and Portuguese. While this may be due to the level of morphological inflection in these languages, it may also be due to differences in corpus size or to particular DSM parameters used in each case. 
In Section 7, we analyze the impact of individual DSM and corpus parameters to better understand this language dependency. Table 4 reports the best configurations for the EN-comp, FR-comp, and PT-comp data sets. However, to determine whether the Spearman scores obtained are robust and generalizable, in this section we report evaluation using cross-validation. For each data set, we partition the 180 compounds into 5 folds of 36 compounds (f_1, f_2, ..., f_5). Then, for each fold f_i, we exhaustively look for the best configuration (values of WINDOWSIZE, DIMENSION, and WORDFORM) on the union of the other folds (\u222a_{j\u2260i} f_j), and predict the 36 compositionality scores for f_i using this configuration. The predicted scores for the 5 folds are then grouped into a single set of predictions, which is evaluated against the 180 human judgments.", |
| "cite_spans": [ |
| { |
| "start": 533, |
| "end": 564, |
| "text": "(Reddy, EN-comp, and Farahmand)", |
| "ref_id": null |
| }, |
| { |
| "start": 1066, |
| "end": 1068, |
| "text": "34", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 423, |
| "end": 430, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 706, |
| "end": 713, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1237, |
| "end": 1244, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1999, |
| "end": 2006, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Overall Results", |
| "sec_num": "6." |
| }, |
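The cross-validation protocol above can be sketched as follows; `evaluate` is a hypothetical callback that scores a DSM configuration on a subset of compounds (e.g., by Spearman correlation between predicted and human scores on that subset).

```python
import random

def cross_validate(compounds, configs, evaluate, n_folds=5, seed=0):
    """Partition compounds into folds by random shuffling; for each fold,
    pick the configuration that scores best on the union of the other
    folds, and use it to predict the held-out fold."""
    items = list(compounds)
    random.Random(seed).shuffle(items)
    folds = [items[i::n_folds] for i in range(n_folds)]
    fold_to_config = []
    for i, fold in enumerate(folds):
        rest = [c for j, f in enumerate(folds) if j != i for c in f]
        best = max(configs, key=lambda cfg: evaluate(cfg, rest))
        fold_to_config.append((fold, best))
    return fold_to_config
```

The per-fold predictions would then be pooled into one set and correlated against all human judgments, and the whole procedure repeated with fresh fold partitions to obtain a confidence interval.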
| { |
| "text": "The partition of compounds into folds is performed automatically, based on random shuffling. 35 To avoid relying on a single arbitrary fold partition, we run cross-validation 10 times, with different fold partitions each time. This process generates 10 Spearman correlations, for which we calculate the average value and a 95% confidence interval. We have calculated cross-validation scores for a wide range of configurations, focusing on the following DSMs: PPMI-thresh, w2v-cbow, and w2v-sg. Figure 4 presents the average Spearman correlations of cross-validation experiments compared with the best results reported in the previous section, referred to as oracle. In the top left panel the x-axis indicates the DSMs for each language using the best oracle configuration, Figure 4(a) . In the other panels, it indicates the best oracle configuration for a specific DSM and a fixed parameter for a given language. We present only a sample of the results for fixed parameters, as they are stable across languages. Results are presented in ascending order of oracle Spearman correlation. For each oracle datapoint, the associated average Spearman from cross-validation is presented along with the 95% confidence interval.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 95, |
| "text": "35", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 494, |
| "end": 502, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 773, |
| "end": 784, |
| "text": "Figure 4(a)", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Cross-Validation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The Spearman correlations obtained through cross-validation are comparable to the ones obtained by the oracle. Moreover, the results are quite stable: increasingly better oracle configurations tend to be correlated with increasingly better cross-validation scores. Indeed, the Pearson r correlation between the 9 oracle points and the 9 cross-validation points in the top-left panel is 0.969, attesting to the correlation between cross-validation and oracle scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-Validation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "For PT-comp, the confidence intervals are quite wide, meaning that prediction quality is sensitive to the choice of compounds used to estimate the best configurations. Probably a larger data set would be required to stabilize cross-validation results. Nonetheless, the other two data sets seem representative enough, so that the small confidence intervals show that, even if we fix the value of a given parameter (e.g., d = 750), the results using cross-validation are stable and very similar to the oracle.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-Validation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The confidence intervals overlapping with oracle data points also indicate that most cross-validation results are not statistically different from the oracle. This suggests that the highest-Spearman oracle configurations could be trusted as reasonable approximations of the best configurations for other data sets collected for the same language constructed using similar guidelines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-Validation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "As an additional test of the robustness of the results obtained, we calculated the performance of the best models obtained for one of the data sets (EN-comp) on a separate held-out data set (EN-comp Ext ). The latter contains 100 compounds balanced for compositionality, not included in EN-comp (that is, not used in any of the preceding experiments). The results obtained on EN-comp Ext are shown in Table 5. They are comparable to, and mostly better than, those for the oracle and for cross-validation. As the items are different in the two data sets, a direct comparison of the results is not possible, but the equivalent performances confirm the robustness of the models and configurations for compositionality prediction. Moreover, these results are obtained in an unsupervised manner, as the compositionality scores are not used to train any of the models. The scores are used only for comparative purposes, for determining the impact of various factors on the ability of these DSMs to predict compositionality.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 402, |
| "end": 409, |
| "text": "Table 5", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation on Held-Out Data", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "In this section, we analyze the influence of DSM parameters on compositionality prediction. We consider different window sizes (Section 7.1), numbers of vector dimensions (Section 7.2), types of corpus preprocessing (Section 7.3), and corpus sizes. For each parameter, we analyze all possible values of other parameters. In other words, we report the best results obtained by fixing a value and considering all possible configurations of the other parameters. Results reported in this section use the pc_uniform function. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Influence of DSM Parameters", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Best results for each DSM and WINDOWSIZE (1+1, 4+4, and 8+8), using BF 1 for Farahmand, and Spearman \u03c1 for other data sets. Thin bars indicate the use of fallback in English. Differences between the two highest Spearman correlations for each model are statistically significant (p < 0.05), except for PPMI-SVD, according to Wilcoxon's sign-rank test.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5", |
| "sec_num": null |
| }, |
| { |
| "text": "DSMs build the representation of every word based on the frequency of other words that appear in its context. Our hypothesis is that larger window sizes result in higher scores, as the additional data allows a better representation of word-level semantics. However, as some of these models adopt different weight decays for larger windows, 36 variation in their behavior related to window size is to be expected. Contrary to our expectations, for the best models in each language, large windows did not lead to better compositionality prediction. Figure 5 shows the best results obtained for each window size. 37 For English, w2v is the best model, and its performance does not seem to depend much on the window size, though there is a slight trend for smaller windows to perform better. For French and Portuguese, PPMI-thresh is the best model only for the minimal window size; its performance drops sharply as the window size increases, such that for larger windows it is outperformed by other models.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 547, |
| "end": 555, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Window Size", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "36 For PPMI-SVD with WINDOWSIZE=8+8, a context word at distance D from its target word is weighted (8 \u2212 D)/8. For glove, the decay happens much faster, with a weight of 8/D, which allows the model to look farther away without being affected by potential noise introduced by distant contexts. 37 Henceforth, we omit results for EN-comp 90 and Reddy, as they are included in EN-comp.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Window Size", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "To assess which of these differences are statistically significant, we have performed Wilcoxon's sign-rank test on the two highest Spearman values for each DSM in each language. All differences are statistically significant (p < 0.05), with the exception of PPMI-SVD.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Window Size", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "The appropriate choice of window size has been shown to be task-specific (Lapesa and Evert 2017) , and the results above suggest that, for compositionality prediction, it depends also on the DSM used. Overall, the trend is for smaller windows to lead to better compositionality prediction.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 96, |
| "text": "(Lapesa and Evert 2017)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Window Size", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "When creating corpus-derived vectors with a DSM, the question is whether additional dimensions can be informative in compositionality prediction. Our hypothesis is that the larger the number of dimensions, the more precise the representations, and the more accurate the compositionality prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimension", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "The results shown in Figure 6 for each of the comparable data sets confirm this trend in the case of the best DSMs: w2v and PPMI-thresh. Moreover, the effect of changing the vector dimensions for the best models seems to be consistent across these languages. The results for PPMI-SVD, lexvec, and glove are more varied, but they are never among the best models for compositionality prediction in any of the languages. 38 All differences between the two highest Spearman correlations are statistically significant (p < 0.05), with the exception of PPMI-SVD for FR-comp, according to Wilcoxon's sign-rank test.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 21, |
| "end": 29, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dimension", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Best results for each DSM and DIMENSION, using BF1 for the Farahmand data set, and Spearman \u03c1 for all the other data sets. For English, the thin bars indicate results using fallback. Differences between the two highest Spearman correlations for each model are statistically significant (p < 0.05), except for PPMI-SVD for FR-comp, according to Wilcoxon's sign-rank test.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 6", |
| "sec_num": null |
| }, |
| { |
| "text": "In related work, DSMs are constructed from corpora with various levels of preprocessing (Bullinaria and Levy 2012; Pennington, Socher, and Manning 2014; Kiela and Clark 2014; Levy, Goldberg, and Dagan 2015; Salle, Villavicencio, and Idiart 2016). In this work, we compare four levels: WORDFORM = surface + , surface, lemma PoS , and lemma, described in Section 5.1, corresponding to decreasing amounts of information. Testing different varieties of corpus preprocessing allows us to explore the trade-off between informational content and the statistical significance related to data sparsity for compositionality prediction. Figure 7 presents the impact of different types of corpus preprocessing on the quality of compositionality prediction. In EN-comp, all differences between the two highest Spearman values for each DSM were significant, according to Wilcoxon's sign-rank test, except for PPMI-thresh, whereas in FR-comp and PT-comp they were significant only for PPMI-TopK and lexvec. However, note that the top two results are often both obtained on representations based on lemmas. If we compare the highest lemma-based result with the highest surface-based result for the same DSM, we find a statistically significant difference in every single case (p < 0.05).", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 114, |
| "text": "(Bullinaria and Levy 2012;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 115, |
| "end": 152, |
| "text": "Pennington, Socher, and Manning 2014;", |
| "ref_id": null |
| }, |
| { |
| "start": 153, |
| "end": 174, |
| "text": "Kiela and Clark 2014;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 175, |
| "end": 206, |
| "text": "Levy, Goldberg, and Dagan 2015;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 207, |
| "end": 245, |
| "text": "Salle, Villavicencio, and Idiart 2016)", |
| "ref_id": "BIBREF68" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 624, |
| "end": 632, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Type of Preprocessing", |
| "sec_num": "7.3" |
| }, |
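As a concrete illustration, the four WORDFORM levels can be pictured as a per-token mapping. This is a sketch, not the authors' pipeline: the token field layout and the tiny stopword list are assumptions made for the example.

```python
# Illustrative mapping of one annotated token under the four WORDFORM levels
# (surface+, surface, lemmaPoS, lemma). Stopword list and field layout are
# invented for this sketch; None means the token is dropped.
STOPWORDS = {"the", "of", "a"}

def preprocess(token, level):
    surface, lemma, pos = token
    if level == "surface+":   # raw surface forms, stopwords kept (most information)
        return surface
    if level == "surface":    # surface forms with stopwords removed
        return None if surface.lower() in STOPWORDS else surface
    if level == "lemmaPoS":   # lemma annotated with its part of speech
        return f"{lemma}/{pos}"
    if level == "lemma":      # lemma only (least sparse, least information)
        return lemma
    raise ValueError(f"unknown level: {level}")
```

Applying the levels in this order trades information for density: the lemma level conflates all inflected forms into one vector, which is what reduces sparsity for morphologically rich languages.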
| { |
| "text": "When considering the results themselves, although the results for English are heterogeneous, for French and Portuguese, the lemma-based representations consistently allow a better prediction of compositionality scores. This may be explained by the fact that these two languages are morphologically richer than English, and lemma-based representations reduce the sparsity in the data, allowing more information to be gathered from the same amount of data. Moreover, adding POS information (lemma PoS vs. lemma) does not seem to bring consistent improvements that are statistically significant. This suggests that words that share the same lemma are semantically close enough that any gains from disambiguation are masked by the sparsity of a higher vocabulary size. Finally, the impact of stopword removal is also inconclusive (surface vs. surface + ), considering the best models for each language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type of Preprocessing", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "If we assume that the bigger the corpus, the better the DSM, this could explain why the results for English are better than those for French and Portuguese, although it does not explain why Portuguese is behind French. 39 In this section, we examine the impact of corpus size on prediction quality by incrementally increasing the amount of data used to generate the DSMs while monitoring the Spearman correlation (\u03c1) with the human annotations. We use only the best DSMs for these languages, PPMI-thresh and w2v-sg, with the configurations that produced highest Spearman scores for each full corpus.", |
| "cite_spans": [ |
| { |
| "start": 219, |
| "end": 221, |
| "text": "39", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus Size", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "As expected, the results in Figure 8 show a smooth, roughly monotonic increase of the \u03c1 values with corpus size, for PPMI-thresh and w2v-sg for each language and", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 28, |
| "end": 36, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Corpus Size", |
| "sec_num": "7.4" |
| }, |
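The saturation behavior reported below can be operationalized with a simple plateau check over a learning curve of (corpus size, ρ) points. The curve values in this sketch are invented for illustration; only the shape (saturation around one billion tokens) mirrors the article's finding.

```python
# Find the corpus size at the first step whose Spearman gain falls below
# `eps`; beyond it, additional data is treated as not improving the model.
# The (size, rho) points are toy numbers shaped like the reported curves.
def plateau_point(curve, eps=0.005):
    """curve: list of (corpus_size, spearman_rho) sorted by size."""
    for (n0, r0), (_, r1) in zip(curve, curve[1:]):
        if r1 - r0 < eps:
            return n0  # first point whose next step gains less than eps
    return curve[-1][0]

curve = [(10**8, 0.48), (3 * 10**8, 0.58), (10**9, 0.63),
         (2 * 10**9, 0.632), (3 * 10**9, 0.633)]
```

On this toy curve the plateau is detected at 10⁹ tokens, matching the qualitative claim that prediction quality stops improving around one billion tokens.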
| { |
| "text": "Best results for each DSM and WORDFORM, using BF 1 for Farahmand data set, and Spearman \u03c1 for all the other data sets. For English, the thin bars indicate results using fallback. In EN-comp all differences between the two highest Spearman values for each DSM were significant, according to Wilcoxon's sign-rank test, except for PPMI-thresh, while in FR-comp and PT-comp they were only significant for PPMI-TopK and lexvec. data set. 40 In all cases there is a clear saturation behavior, so that we can safely say that after one billion tokens, the quality of the predictions reaches a plateau and additional corpus fragments do not bring improvements. This suggests that differences in compositionality prediction performance for these languages cannot be totally explained by differences in corpus sizes.", |
| "cite_spans": [ |
| { |
| "start": 433, |
| "end": 435, |
| "text": "40", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "Up to this point, the predicted compositionality scores for the compounds were calculated using a uniform function that assumes that each component contributes 50% to the meaning of the compound (pc uniform ). However, this might not accurately capture a faithful representation of compounds whose meaning is more semantically related to one of the components (e.g., crocodile tears, which is semantically closer to the head tears; and night owl, which is semantically closer to the modifier night). As this may have an impact on the success of compositionality prediction, in this section we evaluate how different compositionality prediction functions model these compounds.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Influence of Compositionality Prediction Function", |
| "sec_num": "8." |
| }, |
| { |
| "text": "In particular, we proposed pc maxsim , (Section 4) for dynamically determining weights that assign maximal similarity between the compound and each of its components. We have also proposed pc geom , which favors idiomatic readings through the geometric mean of the similarities between a compound and its components. Our hypotheses are that pc maxsim will be better correlated with human scores for compositional and partly compositional compounds, while pc geom can better capture the semantics of idiomatic ones (Section 8.1). First, to verify whether other prediction functions improve results obtained for the best pc uniform configurations reported up to now, we have evaluated every strategy on all DSM configurations. Table 6 shows that the functions that combine both components (columns pc uniform to pc arith ) generate better compositionality predictions than functions that ignore one of the individual components (columns pc head and pc mod ). There is some variation among the combined scores, with the best score indicated in bold. Every best score is statistically different from all other scores in its row (p < 0.05). The results for pc arith and pc uniform are very similar, reflecting their similar formulations. 41 Here we focus on the issue of adjusting \u03b2 in the compositionally constructed vector; that is, we consider the use of pc maxsim instead of pc uniform . This score seems to be beneficial in the case of English (EN-comp), but not in the case of French or Portuguese.", |
| "cite_spans": [ |
| { |
| "start": 1233, |
| "end": 1235, |
| "text": "41", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 725, |
| "end": 732, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Influence of Compositionality Prediction Function", |
| "sec_num": "8." |
| }, |
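To make the compared functions concrete, here is a minimal sketch of pc_head, pc_mod, pc_uniform, and pc_geom as cosine-based scores. It uses toy vectors and plain Python; the real framework computes these similarities over corpus-derived DSM vectors.

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def norm(v):
    """Length-normalize a vector."""
    n = math.sqrt(sum(a * a for a in v))
    return [a / n for a in v]

def pc_head(v_cmp, v_head, v_mod):
    return cos(v_cmp, v_head)          # similarity to the head only

def pc_mod(v_cmp, v_head, v_mod):
    return cos(v_cmp, v_mod)           # similarity to the modifier only

def pc_uniform(v_cmp, v_head, v_mod):
    # each component contributes 50% (beta = 0.5 after normalization)
    comb = [a + b for a, b in zip(norm(v_head), norm(v_mod))]
    return cos(v_cmp, comb)

def pc_geom(v_cmp, v_head, v_mod):
    # geometric mean of the two similarities (assumes both are positive),
    # which favors idiomatic readings: one low similarity drags it down
    return math.sqrt(pc_head(v_cmp, v_head, v_mod) * pc_mod(v_cmp, v_head, v_mod))
```

With a compound vector exactly "between" its components, pc_uniform is maximal, while pc_head and pc_geom report the lower per-component similarity.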
| { |
| "text": "Spearman \u03c1 for the proposed compositionality prediction scores, using the best DSM configuration for each score. Table 7 presents the best pc maxsim model for each data set, along with the average weights assigned to head and modifier for every compound in the data set. Before analyzing the results in Table 7 , we have to verify whether the data sets are balanced for the influence of each component to the meaning of the whole, or if there is any bias towards heads/modifiers. The influence of the head, estimated as the average of hc H /(hc H + hc M ) over all compounds of a data set, is 0.50 for EN-comp, 0.52 for FRcomp, and 0.52 for PT-comp. This indicates that the data sets are balanced in terms of the influence of each component, and neither head nor modifier predominates as more compositional or idiomatic than the other. As for the average \u03b2 weights in pc maxsim , while the weights that maximize compositionality are fairly similar for EN-comp, they strongly favor the head for both FR-comp and PT-comp. This may be explained by the fact that, for the latter, the modifiers are all adjectives, while EN-comp has mostly nouns as modifiers. Surprisingly, this seemingly more realistic weighting of the compound components for French and Portuguese does not reflect in better compositionality scores, and does not correspond to the average influence of modifiers in these data sets, estimated as 0.48 on average. One possible explanation could be that, in these cases, the adjectives may be contributing to some specific more idiomatic meaning that is not found in isolated occurrences of the adjective itself, such as FR beau (lit. beautiful), which is used in the translation of most inlaw family members, such as FR beau-fr\u00e8re (lit. beautiful-brother 'brother-in-law'). In the next section, we investigate which compounds are affected the most by these different scores. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 113, |
| "end": 120, |
| "text": "Table 7", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 303, |
| "end": 310, |
| "text": "Table 7", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 6", |
| "sec_num": null |
| }, |
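A simple way to picture pc_maxsim is a search over β, keeping the weight that maximizes the similarity between the compound vector and the β-weighted combination of its normalized components. This sketch uses toy vectors and a plain grid search; the article does not prescribe this particular optimizer.

```python
import math

def _cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def _norm(v):
    n = math.sqrt(sum(a * a for a in v))
    return [a / n for a in v]

def pc_maxsim(v_cmp, v_head, v_mod, steps=100):
    """Return (best_beta, best_similarity) over a uniform grid of beta values."""
    h, m = _norm(v_head), _norm(v_mod)
    best_beta, best_sim = 0.0, -1.0
    for i in range(steps + 1):
        beta = i / steps
        # beta-weighted combination of the normalized components
        comb = [beta * a + (1 - beta) * b for a, b in zip(h, m)]
        sim = _cos(v_cmp, comb)
        if sim > best_sim:
            best_beta, best_sim = beta, sim
    return best_beta, best_sim
```

For a compound vector identical to its head, the search pushes β to 1 (all weight on the head), which is the head-favoring behavior reported for FR-comp and PT-comp.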
| { |
| "text": "To better evaluate the effect of adjusting \u03b2 for the individual compounds with respect to the pc uniform score, we define the rank improvement as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rank Improvement Analysis", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "improv f (w 1 w 2 ) = |rk uniform (w 1 w 2 ) \u2212 rk human (w 1 w 2 )| \u2212 |rk f (w 1 w 2 ) \u2212 rk human (w 1 w 2 )|,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rank Improvement Analysis", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "where rk indicates the rank of the compound w 1 w 2 in the data set when ordered according to pc uniform , human annotations hc HM , or the compositionality prediction function f . For instance, when f = maxsim, positive improv maxsim values indicate that pc maxsim yields a better approximation of the ranks assigned by hc HM than pc uniform , whereas negative values indicate that pc uniform provides a better ranking.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rank Improvement Analysis", |
| "sec_num": "8.1" |
| }, |
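The definition above translates directly into code. The rank values in the test below are toy numbers; in the article, ranks come from ordering the whole data set under each scoring.

```python
# Rank improvement of prediction function f over pc_uniform for one compound:
# positive values mean f's rank is closer to the human rank than pc_uniform's.
def improv(rk_uniform, rk_f, rk_human):
    return abs(rk_uniform - rk_human) - abs(rk_f - rk_human)
```

For example, if pc_uniform ranks a compound 10th, f ranks it 4th, and humans rank it 3rd, the improvement is |10−3| − |4−3| = 6.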
| { |
| "text": "We perform a cross-lingual analysis, grouping the hc HM scores of the EN-comp, FR-comp, and PT-comp into a unique data set (henceforth ALL-comp), containing 540 compounds. Figure 9 presents the values of rank improvement for the best PPMI-thresh and w2v-sg configurations, ranked according to hc HM (rk human ): compounds that are better predicted by pc maxsim have positive rank movements (above the 0 line). 42 The density of movement on either side of the 0 (no movement) line appears to be similar for both models with pc maxsim performing as well as pc uniform . Figure 9 also marks the outlier compounds with the highest improvements (numbers from 1 to 8) and those with the lowest improvements (letters from A to H), and Table 8 shows their improvement scores. In the case of these outliers, the adjustment seems to be more beneficial to compositional compounds than to idiomatic cases. This is confirmed by a linear regression of the movement of the 8+8 outliers as a function of the compositionality scores hc HM , where we obtain a positive coefficient of r = 0.73 and r = 0.72 for PPMI-thresh and w2v-sg, respectively. There are more outlier compounds for Portuguese and French (particularly the former), suggesting that pc maxsim has a stronger impact on those languages than on English. Moreover, some compounds had a similar improvement under both DSMs, with, for example, high improvement for PT caixa forte literally box strong 'safe' and low improvement for PT cora\u00e7\u00e3o partido 'broken heart'. In addition, pc maxsim also affected some equivalent compounds in different languages, as in the case of PT caixa forte and FR coffre fort. Overall, pc maxsim does not present a considerable impact on the predictions, obtaining an average improvement of improv maxsim = +0.41 across all compounds in ALL-comp. Figure 10 shows the same analysis for f = geom, showing the improvement score of pc geom over pc uniform . 
We hypothesized that pc geom should more accurately represent idiomatic compounds. From the previous sections, we know that pc geom has lower performance than pc uniform when used to estimate the compositionality of the entire data sets (cf. Table 6 ). This is confirmed by an average score of improv geom = \u22127.87. As in Figure 9 , Figure 10 shows a random distribution of improvements. However, the outliers have the opposite pattern, indicating that large reclassifications due to pc geom tend to favor idiomatic instead of compositional compounds. The linear regression of the movement of the outliers as a function of the compositionality scores results in r = \u22120.73 and r = \u22120.82 for PPMI-thresh and w2v-sg, respectively. These confirm our hypothesis for the behavior of pc geom . Table 9 lists the outlier compounds indicated in Figure 10 along with their improvement values. Here again, the majority of the outliers belong to PT-comp. Some of the compounds that were found as outliers in pc maxsim re-appear as outliers for pc geom with inverted polarity in the improvement score, such as the ranks predicted by PPMIthresh for PT prato feito literally plate made 'blue-plate special' (improv maxsim = +58, improv geom = \u2212234) and by w2v-sg for FR bras droit literally arm right 'assistant' (improv maxsim = \u221268, improv geom = +228). This suggests that, as future work, we should consider combining both approaches into a single prediction that decides which score to use for each compound as a function of pc uniform .", |
| "cite_spans": [ |
| { |
| "start": 410, |
| "end": 412, |
| "text": "42", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 172, |
| "end": 180, |
| "text": "Figure 9", |
| "ref_id": "FIGREF8" |
| }, |
| { |
| "start": 568, |
| "end": 576, |
| "text": "Figure 9", |
| "ref_id": "FIGREF8" |
| }, |
| { |
| "start": 728, |
| "end": 735, |
| "text": "Table 8", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 1820, |
| "end": 1829, |
| "text": "Figure 10", |
| "ref_id": "FIGREF10" |
| }, |
| { |
| "start": 2169, |
| "end": 2176, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 2248, |
| "end": 2256, |
| "text": "Figure 9", |
| "ref_id": "FIGREF8" |
| }, |
| { |
| "start": 2259, |
| "end": 2268, |
| "text": "Figure 10", |
| "ref_id": "FIGREF10" |
| }, |
| { |
| "start": 2713, |
| "end": 2720, |
| "text": "Table 9", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 2762, |
| "end": 2771, |
| "text": "Figure 10", |
| "ref_id": "FIGREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rank Improvement Analysis", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "In the previous sections, we examined the performance of the compositionality prediction framework in terms of the correlation between automatic predictions and human judgments across languages. We now investigate the relation between predicted scores and other variables that may have an impact on results, such as familiarity (Section 9.1) and conventionalization (Section 9.2). We also compare the predicted compositionality scores with trends previously found in human scores (Section 9.3). The experiments focus on the ALL-comp data set, which groups the predicted scores from the best configurations on EN-comp, FR-comp, and PT-comp (cf . Table 4 ). ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 643, |
| "end": 652, |
| "text": ". Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Characterization of the Predicted Compositionality", |
| "sec_num": "9." |
| }, |
| { |
| "text": "Results from Section 3.2.2 show that the familiarity of compounds measured as frequency in large corpora is associated with the compositionality scores assigned by humans. We would like to know whether this correlation also holds true to system predictions: Are the most frequent compounds being predicted as more compositional? As expected, the rank correlation between frequency and pc uniform shows medium to Table 10 Spearman \u03c1 correlations between different variables. We consider the set of predicted scores (pc), the set of human-prediction differences (diff), the compound frequencies (freq), and the compound PMI. The predicted scores are the ones from the best configurations of each sub-data set in ALL-comp. Correlations are indicated only when significant (p < 0.05). Table 10 , column \u03c1[pc,freq]), though the level of correlation is somewhat DSM-dependent, are in line with the correlation observed between frequency and human scores, and with the high correlation between predicted and human scores.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 412, |
| "end": 420, |
| "text": "Table 10", |
| "ref_id": null |
| }, |
| { |
| "start": 781, |
| "end": 789, |
| "text": "Table 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicted Compositionality and Familiarity", |
| "sec_num": "9.1" |
| }, |
| { |
| "text": "DSM \u03c1(pc,freq) \u03c1(diff,freq) \u03c1(pc, PMI) \u03c1(diff,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicted Compositionality and Familiarity", |
| "sec_num": "9.1" |
| }, |
| { |
| "text": "Another hypothesis we test is whether frequent compounds are easier to model. A first intuition would be that this hypothesis is true, as a higher number of occurrences is associated with a larger amount of data, from which more representative vectors can be built. To test this hypothesis, we define a compound's difficulty as the difference between the predicted score and the normalized human score, diff = |pc \u2212 (hc HM /5)|, where high values indicate a compound whose compositionality is harder to predict. 43 We found a weak (though statistically significant) correlation between frequency and difficulty for some of the DSMs (Table 10 , column \u03c1[diff,freq]). They are mostly positive, indicating that frequency is correlated with difficulty, which is a surprising result, as it implies that the compositionality of rarer compounds was mildly easier to predict for these systems, disproving the hypothesis above. These results either point to an overall lack of correlation between frequency and difficulty, or indicate mild DSMspecific behavior, which should be investigated in further research.", |
| "cite_spans": [ |
| { |
| "start": 512, |
| "end": 514, |
| "text": "43", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 632, |
| "end": 641, |
| "text": "(Table 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicted Compositionality and Familiarity", |
| "sec_num": "9.1" |
| }, |
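The difficulty measure used here is simply the gap between the predicted score (in [0, 1]) and the human score rescaled from its 0-5 range:

```python
# diff = |pc - hc_HM / 5|; larger values mean the compound's compositionality
# was harder to predict. Scores in the examples are illustrative.
def difficulty(pc, hc_hm):
    return abs(pc - hc_hm / 5)
```

A compound predicted at 0.8 with a human score of 4.0 has zero difficulty; one predicted at 0.2 with a human score of 4.5 has difficulty 0.7.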
| { |
| "text": "PMI is not only a well-known estimator of the level of conventionalization of a multiword expression (Church and Hanks 1990; Evert 2004; Farahmand, Smith, and Nivre 2015) , but it is also used in some DSMs as a way to estimate the strength of association between target and context words. To assess if what our models are implicitly measuring is the association between the component words of a compound rather than compositionality, we now examine the correlation between compositionality scores and PMI.", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 124, |
| "text": "(Church and Hanks 1990;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 125, |
| "end": 136, |
| "text": "Evert 2004;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 137, |
| "end": 170, |
| "text": "Farahmand, Smith, and Nivre 2015)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicted Compositionality and Conventionalization", |
| "sec_num": "9.2" |
| }, |
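For reference, compound PMI can be estimated from corpus counts as follows. This is the standard formulation with toy counts; the article's exact estimator may differ in details such as smoothing or how co-occurrences are counted.

```python
import math

def pmi(count_compound, count_w1, count_w2, n_tokens):
    """Pointwise mutual information between a compound's two components:
    log2 of the ratio between the joint probability and the product of the
    components' marginal probabilities."""
    p_joint = count_compound / n_tokens
    p_w1 = count_w1 / n_tokens
    p_w2 = count_w2 / n_tokens
    return math.log2(p_joint / (p_w1 * p_w2))
```

High PMI means the two words co-occur far more often than chance, the usual signature of a conventionalized expression; negative PMI means they co-occur less often than chance.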
| { |
| "text": "We found only a weak but statistically significant correlation between predicted compositionality and PMI (Table 10 , column \u03c1[pc, PMI]), which suggests that these DSMs preserve some information regarding conventionalization. However, given that no significant correlation between PMI and human compositionality scores was found (Section 3.2.2) and as DSM predictions are strongly correlated to human predictions, these results indicate that our models capture more than conventionalization. They may also be a feature of this particular set of compounds, as even the compositional cases are also conventional to some extent (e.g., white/?yellow wine). Therefore, further investigation of possible links between idiomaticity and conventionalization is needed.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 106, |
| "end": 115, |
| "text": "(Table 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicted Compositionality and Conventionalization", |
| "sec_num": "9.2" |
| }, |
| { |
| "text": "We also calculated the correlation between PMI and the human-prediction difference (diff), to determine if DSMs build less precise vectors for less conventionalized compounds (approximated as those with lower PMI). However, no statistically significant correlation was found for most DSMs (Table 10 , column \u03c1[diff, PMI]).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 289, |
| "end": 298, |
| "text": "(Table 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Predicted Compositionality and Conventionalization", |
| "sec_num": "9.2" |
| }, |
| { |
| "text": "Spearman correlation assesses the performance of a given configuration by providing a single numerical value. This facilitates the comparison between configurations, but it hides the internal distribution of predictions. By splitting the data sets into ranges, we obtain a more fine-grained view of possible patterns linked to compositionality prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Range-Based Analysis of Predicted Compositionality", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "To determine if the compounds that humans agree more on are also more accurately predicted, we divided ALL-comp into three equally sized subsets, according to the standard deviation among human annotators (low, mid-range, and high values of standard deviation, \u03c3 HM ). As high standard deviation indicates disagreement among annotators, it may be an indicator of the difficulty of the annotation. Table 11 presents the best DSMs, according to Spearman's \u03c1 evaluated separately on each of the subsets. Indeed, for the compounds that had low \u03c3 HM , the Spearman values were the highest (between 0.73 and 0.75), while for those with high \u03c3 HM , the Spearman correlation with human judgments was the lowest (between 0.35 and 0.43). These results confirm that higher scores are achieved for the compounds for which humans agree more, and suggest that part of the difficulty of this task for automatic systems is also related to difficulties for humans.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 397, |
| "end": 405, |
| "text": "Table 11", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Range-Based Analysis of Predicted Compositionality", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "To determine if compositional compounds would be more precisely predicted than idiomatic compounds, we divide ALL-comp into three equally sized subsets based on the level of human compositionality scores (low, mid-range, and high values of hc HM ). Table 11 presents the correlation obtained on each subset for the best configuration of each DSM. The more idiomatic compounds have the lowest Spearman values (from 0.16 to 0.29) while the more compositional have the highest ones (from 0.32 to 0.37). These results confirm that the predictions are better for compositional than for idiomatic compounds. Moreover, these scores are much lower than those from the full data set (from 0.63 to 0.66), suggesting that it may be harder to make fine-grained distinctions (e.g., between two compositional compounds like access road and subway system) than to make inter-range distinctions (e.g., between idiomatic and compositional compounds like ivory tower and access road). However, further investigation would be needed to verify this hypothesis.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 249, |
| "end": 257, |
| "text": "Table 11", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Range-Based Analysis of Predicted Compositionality", |
| "sec_num": "9.3" |
| }, |
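The range-based splits above can be sketched as sorting by the chosen key (σ_HM or hc_HM) and slicing into three equal parts, assuming the data set size is divisible by three, as with the 540 compounds of ALL-comp:

```python
# Split items into low / mid / high tertiles according to `key`,
# e.g. key = annotator standard deviation or human compositionality score.
def tertiles(items, key):
    ranked = sorted(items, key=key)
    n = len(ranked) // 3
    return ranked[:n], ranked[n:2 * n], ranked[2 * n:]
```

Spearman's ρ is then computed separately on each subset, which is what exposes the gap between compounds humans agree on (high ρ) and those they disagree on (low ρ).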
| { |
| "text": "We proposed a framework for compositionality prediction of multiword expressions, focusing on nominal compounds and using DSMs for meaning representation. We investigated how accurately DSMs capture idiomaticity compared to human judgments and examined the impact of several variables in the accuracy of the predictions. In order to determine how language dependent the results are, we evaluated the compositionality prediction framework in English, French, and Portuguese, using data sets containing human-rated compositionality scores, some of which were specifically constructed as part of this work. 44 Using these data sets, we presented a large-scale evaluation involving 228 DSMs for each language, and we evaluated more than 9,000 framework configurations to determine the impact of possible factors that may influence compositionality prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "Our experiments confirmed that our framework is able to capture idiomaticity accurately, obtaining a strong correlation with human judgments for all three languages. Comparing the performance of different DSMs, the particular choice of DSM had a noticeable impact on the results, with differences over 0.10 Spearman \u03c1 points for all languages. For the comparable data sets (EN-comp, FR-comp, and PT-comp), the best models were w2v and PPMI-thresh. 45 Results differed according to language: although for English w2v were the best models, for French and Portuguese, PPMI-thresh outperformed the other models. Moreover, the results for the three languages varied considerably, with those for English outperforming by 0.10 and 0.20 Spearman \u03c1 points those for French and Portuguese, respectively. The latter are morphologically richer than the former, and a closer examination of the type of preprocessing adopted for best results reveals that both languages benefit from less sparse representations resulting from lemmatization and stopword removal, while for English no preprocessing was particularly beneficial.", |
| "cite_spans": [ |
| { |
| "start": 448, |
| "end": 450, |
| "text": "45", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "Although corpus size is often assumed to play a fundamental role in the quality of DSMs, so that the bigger the corpus the better the results, prediction quality stabilized at around one billion tokens for all languages. This may reflect the point where the minimum frequency was reached for producing reliable representations for all compounds in these data sets, even the rare cases, and larger corpora did not lead to better predictions. Moreover, for the best models in each language, DSMs with more dimensions resulted in more accurate predictions confirming our hypothesis. We also found a trend for small window sizes leading to better results for the best models in all three languages, contrary to our hypothesis. A typically good configuration used vectors of 750 dimensions built from minimal context windows of one word to each side of the target.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "DSMs were also robust regarding the choice of compositionality prediction function, with a uniform combination of the head and modifier producing the best results for all languages. Other functions like pc maxsim and pc geom , which modify these scores to account for different contributions of each component, produced at best similar results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "A deeper analysis of the predicted compositionality scores revealed that, similarly to human-rated scores, familiarity measured as frequency was positively correlated with predicted compositionality. In the case of conventionalization measured as PMI, no correlation was found with human-rated scores and only a mild correlation was found with some predicted scores, suggesting that our models capture more than compound conventionalization, as they have a strong agreement with human scores. Intra-compound standard deviation on human scores was also found to be related to predicted scores, indicating that DSMs have difficulties on those compounds that humans also found difficult. Moreover, predictions were found to be more accurate for compositional compounds.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "Although there are many questions that still need to be solved regarding compositionality, we believe that the results presented here advance significantly its understanding and computational modeling. Furthermore, the proposed framework opens important avenues of research that are ready to be pursued. First, the role of morphological inflection could be clarified by extending this investigation to even more inflected languages, such as Turkish. Moreover, other categories of MWEs such as verb+noun expressions should be evaluated to determine the interplay between compositionality prediction and syntactic flexibility of MWEs. The ultimate test would be to use predicted compositionality scores in downstream applications and tasks involving some degree of semantic processing, ranging from MWE identification to parsing, and word-sense disambiguation. In particular, it would be interesting to predict compositionality in context, in order to distinguish idiomatic from literal usages in sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "Appendix A. Glossary composition function is a function that takes as input a sequence of vectors v(w i ) to v(w j ) and outputs a compositionally constructed vector v \u2295 (w i . . . w j ) representing the compositional meaning of the sequence, where \u2295 indicates the function used to compose the vectors. Example:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "v \u03b2 (w 1 , w 2 ) = \u03b2 v(w 1 ) ||v(w 1 )|| + (1 \u2212 \u03b2) v(w 2 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
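This composition function is straightforward to implement; a sketch over plain Python lists:

```python
import math

def v_beta(v1, v2, beta):
    """beta-weighted sum of the two length-normalized component vectors."""
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return [beta * a / n1 + (1 - beta) * b / n2 for a, b in zip(v1, v2)]
```

With beta = 0.5 this reduces to the uniform combination used by pc_uniform; pc_maxsim instead searches for the beta that maximizes similarity to the compound's corpus-derived vector.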
| { |
| "text": "||v(w 2 )|| . 1, 23 compositionality prediction configuration is the combination of a particular DSM configuration with a given compositionality prediction function, fully specifying how a predicted compositionality score is calculated for a given word sequence w i . . . w j . 1 compositionality prediction framework is the set of all possible compositionality prediction configurations available. 1, 22", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "compositionality prediction function is a function that takes as input corpus-based vectors for a sequence of words v(w i . . . w j ) and for the individual words composing that sequence v(w i ) . . . v(w j ), and outputs a predicted compositionality score, usually proportional to the similarity between the corpus-based vector v(w i . . . w j ) and a compositionally constructed vector v(w i ) to v(w j ) derived from v(w i ) . . . v(w j ) using a composition function. Example: maxsim. 1 compositionally constructed vector is the output of a composition function, that is, a vector v \u2295 (w i . . . w j ) derived from the individual words' corpus-derived vectors v(w i ) to v(w j ). 1, 23", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "corpus-derived vector is the output of a DSM for a given element w_i of the vocabulary V, that is, a corpus-derived D-dimensional real-numbered vector v(w_i) that represents the meaning of w_i. A corpus-derived vector of a word sequence v(w_i . . . w_j) is built by treating it as a single token in the corpus. 1, 22", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "distributional semantic model (DSM) is a function that takes as input a vocabulary V and a (large) corpus, and outputs a corpus-derived vector v(w_i) for each element w_i of V. predicted compositionality score (pc) is the output of a compositionality prediction function, that is, a real value representing the predicted compositionality of a word sequence w_i . . . w_j. The correlation between predicted compositionality (pc) and human compositionality (hc) scores is used to evaluate a compositionality prediction configuration. When subscripted, indicates the compositionality prediction function used to obtain the score. Example: pc_uniform. 1, 29", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "10." |
| }, |
| { |
| "text": "The number of possible DSM configurations grows exponentially with the number of internal variables in a DSM, precluding an exhaustive search over every possible parameter. We have evaluated in this article the set of variables that are most often manually tuned in the literature, but a reasonable question is whether these results can be further improved by modifying some other often-ignored model-specific parameters. We thus perform some sanity checks through a local search of such parameters around the highest-Spearman configuration of each DSM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Appendix B. Sanity Checks", |
| "sec_num": null |
| }, |
| { |
| "text": "Some of the DSMs under consideration in this article are iterative: they re-read and reprocess the same corpus multiple times. For those DSMs, we present the results of running their best configuration, but with a higher number of iterations. This higher number is inspired by models in the literature, where, for example, the number of glove iterations can be as high as 50 (Salle, Villavicencio, and Idiart 2016) or even 100 (Pennington, Socher, and Manning 2014). The intuition is that most models will lose some information (due to their probabilistic sampling), which could be regained at the cost of a higher number of iterations. Table 12 presents a comparison between the baseline \u03c1 for 15 iterations and the \u03c1 obtained when 100 iterations are performed. For all DSMs, we see that the increase in the number of iterations does not improve the quality of the vectors, with the relatively small number of 15 iterations yielding better results. This may suggest that a small number of iterations already samples enough distributional information, with further iterations accruing additional noise from low-frequency words. The extra iterations could also cause the DSM to overfit particularities of the corpus, which would reduce the quality of the underlying vectors. Given the extra cost of running more iterations, 46 we refrained from building further models with as many iterations in the rest of the article.", |
| "cite_spans": [ |
| { |
| "start": 404, |
| "end": 443, |
| "text": "(Salle, Villavicencio, and Idiart 2016)", |
| "ref_id": "BIBREF68" |
| }, |
| { |
| "start": 1395, |
| "end": 1397, |
| "text": "46", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 666, |
| "end": 674, |
| "text": "Table 12", |
| "ref_id": "TABREF12" |
| } |
| ], |
| "eq_spans": [], |
| "section": "B.1 Number of Iterations", |
| "sec_num": null |
| }, |
| { |
| "text": "Minimum-count thresholds are often neglected in the literature, with a default configuration of 0, 1, or 5 presumably used by most authors. An exception to this trend is the threshold of 100 occurrences used by Levy, Goldberg, and Dagan (2015), whose toolkit we use in PPMI-SVD. No explicit justification has been found for this higher word-count threshold. A reasonable hypothesis would be that higher thresholds improve the quality of the data, as they filter rare words more aggressively. Table 13 presents the result from the highest-Spearman configurations along with the results for an identical configuration with a higher occurrence threshold of 50. 47 The results unanimously agree that a higher threshold does not remove any extra noise. In particular, for PPMI-SVD, it seems to discard enough useful information to considerably reduce the quality of the compositionality prediction measure. The results strongly contradict the default configuration used for PPMI-SVD, suggesting that a lower word-count threshold might yield better results for this task.", |
| "cite_spans": [ |
| { |
| "start": 218, |
| "end": 250, |
| "text": "Levy, Goldberg, and Dagan (2015)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 665, |
| "end": 667, |
| "text": "47", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 499, |
| "end": 507, |
| "text": "Table 13", |
| "ref_id": "TABREF13" |
| } |
| ], |
| "eq_spans": [], |
| "section": "B.2 Minimum Count Threshold", |
| "sec_num": null |
| }, |
| { |
| "text": "For many models, the best window size found was either WINDOWSIZE = 1+1 or WINDOWSIZE = 4+4 (see Section 7.1). It is possible that a higher score could be obtained by a configuration in between. While an exhaustive search would be the ideal solution, an initial approximation can be obtained by re-running the experiments on the highest-Spearman configurations with the window size replaced by 2+2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B.3 Windows of Size 2+2", |
| "sec_num": null |
| }, |
| { |
| "text": "Results shown in Table 14 for a window size of 2+2 are consistently worse than the base model, indicating that the optimal configuration is likely the one obtained with a window size of 1+1 or 4+4. This is further supported by the fact that most DSMs had their best configuration with a window size of 1+1 or 8+8, with few cases of 4+4 as best model, which suggests that the quality of most configurations in the space of models is either monotonically increasing or decreasing with regard to these window sizes, thus favoring the configurations with more extreme WINDOWSIZE parameters. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 17, |
| "end": 25, |
| "text": "Table 14", |
| "ref_id": "TABREF14" |
| } |
| ], |
| "eq_spans": [], |
| "section": "B.3 Windows of Size 2+2", |
| "sec_num": null |
| }, |
| { |
| "text": "As seen in Section 7.2, some DSMs obtain better results when moving from 250 to 500 dimensions, and this trend continues when moving to 750 dimensions. This behavior is notably stronger for PPMI-thresh, which suggests that an even higher number of dimensions could have better predictive power. Table 15 presents the result of running PPMI-thresh for increasing values of the DIMENSION parameter. The baseline configuration (indicated in Table 15 ) was the highest-scoring configuration found in Section 7.2: lemmaPoS.W1.d750 for PT-comp and FR-comp, and surface.W8.d750 for Reddy. As seen in Section 7.2, results for 250 and 500 dimensions have lower scores than the results for 750 dimensions. Results for 1,000 dimensions were mixed: they are slightly worse for FR-comp and EN-comp, and slightly better for PT-comp. Increasing the number of dimensions further generates progressively worse models. These results suggest that the maximum vector quality is achieved between 750 and 1,000 dimensions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 295, |
| "end": 303, |
| "text": "Table 15", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 444, |
| "end": 452, |
| "text": "Table 15", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "B.4 Higher Number of Dimensions", |
| "sec_num": null |
| }, |
| { |
| "text": "The word vectors generated by the glove and w2v models have some level of nondeterminism caused by random initialization and random sampling techniques. A reasonable concern would be whether the results presented for different parameter variations are close enough to the scores obtained by an average model. To assess the variability of these models, we evaluated three different runs of every DSM configuration (the original execution \u03c1_1, used elsewhere in this article, along with two other executions \u03c1_2 and \u03c1_3) for glove, w2v-cbow, and w2v-sg. We then calculate the average \u03c1_avg of these three executions for every model. Table 16 reports the highest-Spearman configurations of \u03c1_avg for the Reddy and EN-comp data sets. When comparing \u03c1_avg to the results of the original execution \u03c1_1, we see that the variability in the different executions of the same configuration is minimal. This is further confirmed by the low sample standard deviation 48 obtained from the scores of the three executions. Given the high stability of these models, results in the rest of the article were calculated and reported as \u03c1_1 for all data sets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 633, |
| "end": 641, |
| "text": "Table 16", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "B.5 Random Initialization", |
| "sec_num": null |
| }, |
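The comparisons in these appendices all rely on Spearman ρ between predicted and human scores. As a reference, ρ can be computed as the Pearson correlation of the rank vectors; the following is a minimal pure-Python sketch (assuming no tied scores), not the article's implementation.

```python
def rank(xs):
    """1-based ranks of the values in xs; assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

def spearman(xs, ys):
    """Spearman rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because only ranks matter, any monotone transformation of the scores leaves ρ unchanged, which is why it is a natural choice for comparing predicted against human compositionality scores.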
| { |
| "text": "Along with the verification of parameters, we also evaluate whether data set variations could yield better results. In particular, we consider the use of filtering techniques, which are used in the literature as a method of guaranteeing data set quality. As per Roller, Schulte im Walde, and Scheible (2013), we consider two strategies of data removal: (1) removing individual outlier compositionality judgments through z-score filtering; and (2) removing all annotations from outlier human judges. A compositionality judgment is considered an outlier if it stands more than z standard deviations away from the mean; a human judge is deemed an outlier if their Spearman correlation with the average of the others, \u03c1_oth, is lower than a given threshold R. 49 These methods allow us to remove accidentally erroneous annotations, as well as annotators whose responses deviated too much from the mean (in particular, spammers and non-native speakers). Table 17 presents the evaluation of raw and filtered data sets regarding two quality measures: the average of the standard deviations for all NCs (\u03c3), and the proportion of NCs in the data set whose standard deviation is higher than 1.5 (P_{\u03c3>1.5}), as per Reddy, McCarthy, and Manandhar (2011) . The results suggest that filtering techniques can improve the overall quality of the data sets, as seen in the reduction of the proportion of NCs with high standard deviation, as well as in the reduction of the average standard deviation itself. We additionally present the data retention rate (DRR), which is the proportion of NCs that remain in the data set after filtering. While the DRR does indicate a reduction in the amount of data, this reduction may be considered acceptable in light of the improvement suggested by the quality measures.", |
| "cite_spans": [ |
| { |
| "start": 1201, |
| "end": 1238, |
| "text": "Reddy, McCarthy, and Manandhar (2011)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 945, |
| "end": 953, |
| "text": "Table 17", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "B.6 Data Filtering", |
| "sec_num": null |
| }, |
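A minimal sketch of the z-score judgment filter and of the two quality measures (average σ and P_{σ>1.5}) described above; the use of the sample standard deviation and the dict-of-judgments data layout are our assumptions, not details from the article.

```python
import statistics

def zscore_filter(judgments, z=2.2):
    """Drop individual compositionality judgments lying more than
    z standard deviations from the mean of their compound."""
    mean = statistics.mean(judgments)
    sd = statistics.stdev(judgments)  # sample standard deviation
    if sd == 0:
        return list(judgments)
    return [j for j in judgments if abs(j - mean) / sd <= z]

def quality_measures(dataset):
    """dataset maps each compound to its list of judgments.
    Returns the average per-compound standard deviation and the
    proportion of compounds with standard deviation above 1.5."""
    sds = [statistics.stdev(js) for js in dataset.values()]
    avg_sd = statistics.mean(sds)
    p_high = sum(sd > 1.5 for sd in sds) / len(sds)
    return avg_sd, p_high
```

With the thresholds reported for EN-comp 90 (z = 2.2), a single judgment of 0 among seven judgments of 3 is flagged and removed, while the consistent judgments are kept.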
| { |
| "text": "In a more detailed analysis, we have verified that the improvement in these quality measures is heavily tied to the use of z-score filtering, with similar results obtained when it is considered alone. The application of R-filtering by itself, on the other hand, did not show any noticeable improvement in the quality measures for reasonable amounts of DRR. This is the opposite of what was found by Roller, Schulte im Walde, and Scheible (2013) on their German data set, where only R-filtering was found to improve results under these quality measures. We present our findings in more detail in .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B.6 Data Filtering", |
| "sec_num": null |
| }, |
| { |
| "text": "We then consider whether filtering can have an impact on the performance of predicted compositionality scores. For each of the 228 model configurations constructed for each language, we launched an evaluation on the filtered EN-comp 90 , FR-comp, and PT-comp data sets (using z-score filtering only, as it was responsible for most of the improvement in quality measures). Overall, no improvement was observed in the results of the prediction (values of Spearman \u03c1) when we compare raw and filtered data sets. Looking more specifically at the best configurations for each DSM (see Table 18 ), we can see that most results do not significantly change when the evaluation is performed on the raw or filtered data sets. This suggests that the amount of judgments collected for each compound greatly offsets any irregularity caused by outliers, making the use of filtering techniques superfluous.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 589, |
| "end": 597, |
| "text": "Table 18", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "B.6 Data Filtering", |
| "sec_num": null |
| }, |
| { |
| "text": "The questionnaire was structured in five subtasks, presented to the annotators through these instructions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Appendix C. Questionnaire", |
| "sec_num": null |
| }, |
| { |
| "text": "Read the compound itself.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "Read 3 sentences containing the compound.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Provide 2 to 3 synonym expressions for the target compound seen in the sentences, preferably involving one of the words in the compound. We ask annotators to prioritize short expressions, with 1 to 3 words each, and to try to include the MWE components in their reply (eliciting a paraphrase).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Using a Likert scale from 0 to 5, judge how much of the meaning of the compound comes from the modifier and the head separately. Figure 11 shows an example for the judgment of the head.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 129, |
| "end": 138, |
| "text": "Figure 11", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "Using a Likert scale from 0 to 5, judge how much of the meaning of the compound comes from its components.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "Evaluating compositionality of a compound regarding its head.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "We require answers in an even-numbered scale (there are 6 possibilities between 0 and 5), as otherwise the participants could be biased toward the middle score. In order to help participants visualize the meaning of their reply, whenever their mouse hovers over a particular score, we present a guiding tooltip, as can be seen in Figure 11 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 330, |
| "end": 339, |
| "text": "Figure 11", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "The order of subtasks has also been taken into account. During a pilot test, we found that presenting the multiple-choice questions (subtasks 4-5) before asking for synonyms (subtask 3) yielded lower agreement, as users were often less self-consistent in the multiple-choice questions (e.g., replying \"non-compositional\" for subtask 4 but \"compositional\" for subtask 5), even if they carefully selected their synonyms in response to subtask 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "The request for synonyms before the multiple-choice questions prompts the participants to focus on the meaning of the compound. These synonyms can then also be taken into account when considering the semantic contribution of each element of the compound-we leave this for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "We present below the 90 nominal compounds in EN-comp 90 and the 100 nominal compounds in EN-comp Ext , along with their human-rated compositionality scores. We refer to Reddy, McCarthy, and Manandhar (2011) for the other 90 compounds belonging to Reddy which, together with the former two sets, represent 280 nominal compounds in total. ", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 206, |
| "text": "Reddy, McCarthy, and Manandhar (2011)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Appendix D. List of English Compounds", |
| "sec_num": null |
| }, |
| { |
| "text": "Attributed to Frege (1892/1960).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Word vectors obtained after dimensionality reduction are nowadays often called word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The task of determining whether a phrase is compositional is closely related to MWE discovery (Constant et al. 2017), which aims to automatically extract MWE lists from corpora.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The terms noun compound and compound noun are usually reserved for nominal compounds formed by sequences of nouns only, typical of Germanic languages but not frequent in Romance languages. 6 In this article, examples are preceded by their language codes: EN for English, FR for French, and PT for Brazilian Portuguese. In the absence of a language code, English is implied.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Nakov (2008) also proposes a method for automatically extracting paraphrases from the web to classify nominal compounds. This was extended in a SemEval 2013 task, where participants had to rank free paraphrases according to the semantic relations in the compounds(Hendrickx et al. 2013).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For English, only EN-comp 90 and EN-comp Ext (90 and 100 new compounds, respectively) are considered. Reddy (included in EN-comp) is analyzed in Reddy, McCarthy, and Manandhar (2011).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We have not attempted to select compounds that are translations of each other, as a compound in a given language may be realized differently in the other languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Freely available at: http://pageperso.lis-lab.fr/~carlos.ramisch/?page=downloads/compounds", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Participants with negative correlation with the mean, and answers farther than \u00b11.5 from the mean. 12 Only FR-comp is shown as the other data sets display similar patterns.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A disagreement between answers a and b is weighted |a \u2212 b|.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "r^2_arith and r^2_geom are .91 and .96 in PT-comp, .90 and .96 in EN-comp 90 , and .92 and .95 in EN-comp Ext .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Implemented as feat compositionality.py in the mwetoolkit: http://mwetoolkit.sf.net. 16 Except when explicitly indicated, the term vector refers to corpus-derived vectors output by DSMs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In practice, for the special case of two words, we do not need to perform parameter search for \u03b2, which has a closed form obtained by solving the equation \u2202pc_\u03b2(w_1 w_2)/\u2202\u03b2 = 0: \u03b2 = [cos(w_1 w_2, w_1) \u2212 cos(w_1 w_2, w_2) \u00d7 cos(w_1, w_2)] / [(cos(w_1 w_2, w_1) + cos(w_1 w_2, w_2)) \u00d7 (1 \u2212 cos(w_1, w_2))].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
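The closed-form β from the footnote above can be checked numerically. This sketch computes β from the three cosines and compares it against a grid search over pc_β, using toy 2-dimensional vectors; it is our illustration, not the article's code.

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def beta_closed_form(v12, v1, v2):
    """beta maximizing pc_beta, per the closed form in the footnote."""
    c1 = cos_sim(v12, v1)
    c2 = cos_sim(v12, v2)
    c12 = cos_sim(v1, v2)
    return (c1 - c2 * c12) / ((c1 + c2) * (1 - c12))

def pc_beta(v12, v1, v2, beta):
    """Cosine between the compound vector and the beta-weighted
    combination of the length-normalized component vectors."""
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    composed = [beta * a / n1 + (1 - beta) * b / n2 for a, b in zip(v1, v2)]
    return cos_sim(v12, composed)
```

For v(w1w2) = [2, 1], v(w1) = [1, 0], v(w2) = [0, 1], the closed form gives β = 2/3, and no grid point over [0, 1] scores higher than pc_β at that value.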
| { |
| "text": "http://corpusbrasileiro.pucsp.br/cb/Inicial.html 19 Wikipedia articles downloaded in June 2016. 20 Hyphenated compounds are also re-tokenized with an underscore separator. 21 Therefore, in Section 5.2, the terms target/context words may actually refer to compounds. 22 https://hunspell.github.io 23 In the lemmatized corpora, the lemmas of proper names are replaced by placeholders.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/alexandres/lexvec 31 This is in line with the authors' threshold suggestions (Salle, Villavicencio, and Idiart 2016). 32 Common window sizes are between 1+1 and 10+10, but a few works adopt larger sizes like 16+16 or 20+20 (Kiela and Clark 2014; Lapesa and Evert 2014).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A compound is considered as noncompositional if at least 2 out of 4 annotators annotate it as noncompositional. 34 This refers to 5 out of 180 in EN-comp and 129 out of 1,042 in Farahmand. For these, the fallback strategy assigns the average compositionality score. Although fallback produces slightly better results for EN-comp, it does the opposite for Farahmand, which contains a larger proportion of missing compounds (2.8% vs. 12.4%).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We have also considered separating folds so as to be balanced regarding their compositionality scores. The results were similar to the ones reported here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For PPMI-SVD and lexvec, this behavior might be related to the fact that both methods perform a factorization of the PPMI matrix. 39 As the characteristics of Farahmand are different from the other data sets, in this analysis we only use the other more comparable data sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For PPMI-thresh, eight different samplings of corpus fragments were performed (for a total of 800 DSMs per language), with each y-axis data point presenting the average and standard deviation of the \u03c1 obtained from those samplings. For w2v-sg, since it is much more time-consuming, a single sampling was used, and thus only one execution was performed for each datapoint (for a total of 100 DSMs per language).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The Pearson correlations (averaged across 7 DSMs) between pc arith and pc uniform are r = .972 for EN-comp, r = .991 for FR-comp, and r = .969 for PT-comp, confirming their similar results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We focus on one representative of PPMI-based DSMs and one representative of word-embedding models. Similar results were observed for the best configurations of other DSMs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We linearly normalize predicted scores to be between 0 and 1. However, given that negative scores are rare in practice, unreported correlations with non-normalized pc are similar to the ones reported.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The resulting data sets and framework implementation are freely available to the community. 45 As Farahmand is considerably different from the other data sets, a direct comparison is not possible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The running time grows linearly with the number of iterations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The threshold used for \u03c1 base depends on the DSM, and is described in Section 5.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The low standard deviation is not a unique property of high-ranking configurations: The average of deviations for all models was .004 for EN-comp and .006 for Reddy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The judgment threshold we adopted was z = 2.2 for EN-comp 90 , z = 2.2 for PT-comp, and z = 2.5 for FR-comp. The human judge threshold was R = 0.5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been partly funded by projects PARSEME (Cost Action IC1207), PARSEME-FR (ANR-14-CERA-0001), AIM-WEST (FAPERGS-INRIA 1706-2551/ 13-7), CNPq (312114/2015-0, 423843/2016-8) \"Simplifica\u00e7\u00e3o Textual de Express\u00f5es Complexas,\" sponsored by Samsung Eletr\u00f4nica da Amaz\u00f4nia Ltda. under the terms of Brazilian federal law No. 8.248/91. We would like to thank the anonymous reviewers who provided numerous helpful suggestions, Alexis Nasr for reviewing earlier versions of this article, Rodrigo Wilkens and Leonardo Zilio for contributing to the data set creation, and all anonymous annotators who judged the compositionality of compounds.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "We present below the 180 nominal compounds in FR-comp, along with their humanrated compositionality scores. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Appendix E. List of French Compounds", |
| "sec_num": null |
| }, |
| { |
| "text": "We present below the 180 nominal compounds in PT-comp, along with their humanrated compositionality scores. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Appendix F. List of Portuguese Compounds", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A study on similarity and relatedness using distributional and wordnet-based approaches", |
| "authors": [ |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Enrique", |
| "middle": [], |
| "last": "Alfonseca", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [ |
| "B" |
| ], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jana", |
| "middle": [], |
| "last": "Kravalova", |
| "suffix": "" |
| }, |
| { |
| "first": "Marius", |
| "middle": [], |
| "last": "Pasca", |
| "suffix": "" |
| }, |
| { |
| "first": "Aitor", |
| "middle": [], |
| "last": "Soroa", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings", |
| "volume": "", |
| "issue": "", |
| "pages": "19--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agirre, Eneko, Enrique Alfonseca, Keith B. Hall, Jana Kravalova, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, May 31-June 5, 2009, pages 19-27, Boulder, CO.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Inter-coder agreement for computational linguistics", |
| "authors": [ |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "4", |
| "pages": "555--596", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Artstein, Ron, and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Multiword expressions", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Su", |
| "middle": [ |
| "Nam" |
| ], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Handbook of Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "267--292", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baldwin, Timothy, and Su Nam Kim. 2010. Multiword expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing, 2nd edition. CRC Press, Taylor and Francis Group, Boca Raton, FL, pages 267-292.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A statistical approach to the semantics of verb-particles", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Bannard", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Lascarides", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment", |
| "volume": "18", |
| "issue": "", |
| "pages": "65--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bannard, Colin, Timothy Baldwin, and Alex Lascarides. 2003. A statistical approach to the semantics of verb-particles. In Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment (Volume 18), pages 65-72, Stroudsburg, PA.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The wacky wide web: A collection of very large linguistically processed web-crawled corpora", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvia", |
| "middle": [], |
| "last": "Bernardini", |
| "suffix": "" |
| }, |
| { |
| "first": "Adriano", |
| "middle": [], |
| "last": "Ferraresi", |
| "suffix": "" |
| }, |
| { |
| "first": "Eros", |
| "middle": [], |
| "last": "Zanchetta", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Language Resources and Evaluation", |
| "volume": "43", |
| "issue": "3", |
| "pages": "209--226", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baroni, Marco, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209-226.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "238--247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baroni, Marco, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Distributional memory: A general framework for corpus-based semantics", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Lenci", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Computational Linguistics", |
| "volume": "36", |
| "issue": "4", |
| "pages": "673--721", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baroni, Marco, and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673-721.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The Parsing System \"palavras\": Automatic Grammatical Analysis of Portuguese in a Constraint Grammar Framework", |
| "authors": [ |
| { |
| "first": "Eckhard", |
| "middle": [], |
| "last": "Bick", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bick, Eckhard. 2000. The Parsing System \"palavras\": Automatic Grammatical Analysis of Portuguese in a Constraint Grammar Framework. Ph.D. thesis, University of Aarhus.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Identification of multiword expressions in the brWaC", |
| "authors": [ |
| { |
| "first": "Rodrigo", |
| "middle": [], |
| "last": "Boos", |
| "suffix": "" |
| }, |
| { |
| "first": "Kassius", |
| "middle": [], |
| "last": "Prestes", |
| "suffix": "" |
| }, |
| { |
| "first": "Aline", |
| "middle": [], |
| "last": "Villavicencio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "728--735", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boos, Rodrigo, Kassius Prestes, and Aline Villavicencio. 2014. Identification of multiword expressions in the brWaC. In Proceedings of the Conference on Language Resources and Evaluation 2014, pages 728-735, ELRA. ACL Anthology Identifier: L14-1429.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A generalisation of lexical functions for composition in distributional semantics", |
| "authors": [ |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bride", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Van De Cruys", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Asher", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Association for Computational Linguistics", |
| "volume": "", |
| "issue": "1", |
| "pages": "281--291", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bride, Antoine, Tim Van de Cruys, and Nicholas Asher. 2015. A generalisation of lexical functions for composition in distributional semantics. In Association for Computational Linguistics (1), pages 281-291.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Extracting semantic representations from word co-occurrence statistics: Stop-lists, stemming, and SVD", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "A" |
| ], |
| "last": "Bullinaria", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "P" |
| ], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Behavior Research Methods", |
| "volume": "44", |
| "issue": "3", |
| "pages": "890--907", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bullinaria, John A., and Joseph P. Levy. 2012. Extracting semantic representations from word co-occurrence statistics: Stop-lists, stemming, and SVD. Behavior Research Methods, 44(3):890-907.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A framework for the construction of monolingual and cross-lingual word similarity datasets", |
| "authors": [ |
| { |
| "first": "Jos\u00e9", |
| "middle": [], |
| "last": "Camacho-Collados", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [ |
| "Taher" |
| ], |
| "last": "Pilehvar", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Camacho-Collados, Jos\u00e9, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A framework for the construction of monolingual and cross-lingual word similarity datasets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1-7, Beijing. Cap, Fabienne, Manju Nirmal, Marion Weller, and Sabine Schulte im Walde. 2015. How to account for idiomatic German support verb constructions in statistical machine translation. In Proceedings of the 11th Workshop on Multiword Expressions, pages 19-28, Association for Computational Linguistics, Denver. Carpuat, Marine, and Mona Diab. 2010. Task-based evaluation of multiword expressions: A pilot study in statistical machine translation. In Proceedings of NAACL/HLT 2010, pages 242-245, Los Angeles.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Word association norms, mutual information, and lexicography", |
| "authors": [ |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "Ward" |
| ], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Hanks", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational Linguistics", |
| "volume": "16", |
| "issue": "1", |
| "pages": "22--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Church, Kenneth Ward, and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A coefficient of agreement for nominal scales", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 1960, |
| "venue": "Educational and Psychological Measurement", |
| "volume": "20", |
| "issue": "1", |
| "pages": "37--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cohen, Jacob. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Multiword expression processing: A survey", |
| "authors": [ |
| { |
| "first": "Mathieu", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00fcl\u015fen", |
| "middle": [], |
| "last": "Eryigit", |
| "suffix": "" |
| }, |
| { |
| "first": "Johanna", |
| "middle": [], |
| "last": "Monti", |
| "suffix": "" |
| }, |
| { |
| "first": "Lonneke", |
| "middle": [], |
| "last": "Van Der Plas", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Rosner", |
| "suffix": "" |
| }, |
| { |
| "first": "Amalia", |
| "middle": [], |
| "last": "Todirascu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Computational Linguistics", |
| "volume": "43", |
| "issue": "4", |
| "pages": "837--892", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Constant, Mathieu, G\u00fcl\u015fen Eryigit, Johanna Monti, Lonneke Van Der Plas, Carlos Ramisch, Michael Rosner, and Amalia Todirascu. 2017. Multiword expression processing: A survey. Computational Linguistics, 43(4):837-892.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Predicting the compositionality of nominal compounds: Giving word embeddings a hard time", |
| "authors": [ |
| { |
| "first": "Silvio", |
| "middle": [], |
| "last": "Cordeiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Idiart", |
| "suffix": "" |
| }, |
| { |
| "first": "Aline", |
| "middle": [], |
| "last": "Villavicencio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1986--1997", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cordeiro, Silvio, Carlos Ramisch, Marco Idiart, and Aline Villavicencio. 2016. Predicting the compositionality of nominal compounds: Giving word embeddings a hard time. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1986-1997, Berlin.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "mwetoolkit+sem: Integrating word embeddings in the mwetoolkit for semantic MWE processing", |
| "authors": [ |
| { |
| "first": "Silvio", |
| "middle": [], |
| "last": "Cordeiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Aline", |
| "middle": [], |
| "last": "Villavicencio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1221--1225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cordeiro, Silvio, Carlos Ramisch, and Aline Villavicencio. 2016. mwetoolkit+sem: Integrating word embeddings in the mwetoolkit for semantic MWE processing. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1221-1225, European Language Resources Association (ELRA), Paris.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Scaling context space", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "231--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Curran, James R., and Marc Moens. 2002. Scaling context space. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 231-238.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Indexing by latent semantic analysis", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Deerwester", |
| "suffix": "" |
| }, |
| { |
| "first": "Susan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Dumais", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [ |
| "W" |
| ], |
| "last": "Furnas", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "K" |
| ], |
| "last": "Landauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Harshman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Journal of the American Society for Information Science", |
| "volume": "41", |
| "issue": "6", |
| "pages": "391", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deerwester, Scott, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A multiword expression data set: Annotating non-compositionality and conventionalization for English noun compounds", |
| "authors": [ |
| { |
| "first": "Meghdad", |
| "middle": [], |
| "last": "Farahmand", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 11th Workshop on Multiword Expressions", |
| "volume": "", |
| "issue": "", |
| "pages": "29--33", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Evert, Stefan. 2004. The Statistics of Word Cooccurrences: Word Pairs and Collocations. Ph.D. thesis, Institut f\u00fcr maschinelle Sprachverarbeitung, University of Stuttgart, Stuttgart, Germany. Farahmand, Meghdad, Aaron Smith, and Joakim Nivre. 2015. A multiword expression data set: Annotating non-compositionality and conventionalization for English noun compounds. In Proceedings of the 11th Workshop on Multiword Expressions, pages 29-33, Association for Computational Linguistics, Denver.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Unsupervised type and token identification of idiomatic expressions", |
| "authors": [ |
| { |
| "first": "Afsaneh", |
| "middle": [], |
| "last": "Fazly", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "Suzanne", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "1", |
| "pages": "61--103", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fazly, Afsaneh, Paul Cook, and Suzanne Stevenson. 2009. Unsupervised type and token identification of idiomatic expressions. Computational Linguistics, 35(1):61-103.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Identifying bad semantic neighbors for improving distributional thesauri", |
| "authors": [ |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Ferret", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Association for Computational Linguistics (1)", |
| "volume": "", |
| "issue": "", |
| "pages": "561--571", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ferret, Olivier. 2013. Identifying bad semantic neighbors for improving distributional thesauri. In Association for Computational Linguistics (1), pages 561-571.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Detecting multi-word expressions improves word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Finlayson", |
| "suffix": "" |
| }, |
| { |
| "first": "Nidhi", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Association for Computational Linguistics 2011 Workshop on MWEs", |
| "volume": "", |
| "issue": "", |
| "pages": "20--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Finlayson, Mark, and Nidhi Kulkarni. 2011. Detecting multi-word expressions improves word sense disambiguation. In Proceedings of the Association for Computational Linguistics 2011 Workshop on MWEs, pages 20-24, Portland, OR.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "A synopsis of linguistic theory, 1930-1955", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "R" |
| ], |
| "last": "Firth", |
| "suffix": "" |
| } |
| ], |
| "year": 1957, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "168--205", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Firth, John R. 1957. A synopsis of linguistic theory, 1930-1955. In F. R. Palmer, ed., Selected Papers of J. R. Firth, pages 168-205, Longman, London.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [ |
| "L" |
| ], |
| "last": "Fleiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "Educational and Psychological Measurement", |
| "volume": "33", |
| "issue": "3", |
| "pages": "613--619", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fleiss, Joseph L., and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33(3):613-619.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "\u00dcber Sinn und Bedeutung", |
| "authors": [ |
| { |
| "first": "Gottlob", |
| "middle": [], |
| "last": "Frege", |
| "suffix": "" |
| } |
| ], |
| "year": 1892, |
| "venue": "Zeitschrift f\u00fcr Philosophie und philosophische Kritik", |
| "volume": "100", |
| "issue": "", |
| "pages": "25--50", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frege, Gottlob. 1892/1960. \u00dcber Sinn und Bedeutung. Zeitschrift f\u00fcr Philosophie und philosophische Kritik, 100:25-50. Translated, as 'On Sense and Reference,' by Max Black.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "New experiments in distributional representations of synonymy", |
| "authors": [ |
| { |
| "first": "Dayne", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Blume", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Byrnes", |
| "suffix": "" |
| }, |
| { |
| "first": "Edmond", |
| "middle": [], |
| "last": "Chow", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadik", |
| "middle": [], |
| "last": "Kapadia", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Rohwer", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiqiang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Freitag, Dayne, Matthias Blume, John Byrnes, Edmond Chow, Sadik Kapadia, Richard Rohwer, and Zhiqiang Wang. 2005. New experiments in distributional representations of synonymy. In Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 25-32.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "On the semantics of noun compounds", |
| "authors": [ |
| { |
| "first": "Roxana", |
| "middle": [], |
| "last": "Girju", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Moldovan", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Tatu", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Antohe", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computer Speech & Language", |
| "volume": "19", |
| "issue": "4", |
| "pages": "479--496", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Girju, Roxana, Dan Moldovan, Marta Tatu, and Daniel Antohe. 2005. On the semantics of noun compounds. Computer Speech & Language, 19(4):479-496.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Computing semantic compositionality in distributional semantics", |
| "authors": [ |
| { |
| "first": "Emiliano", |
| "middle": [], |
| "last": "Guevara", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Ninth International Conference on Computational Semantics, IWCS '11", |
| "volume": "", |
| "issue": "", |
| "pages": "135--144", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Goldberg, Adele E. 2015. Compositionality, Chapter 24. Routledge, Amsterdam. Guevara, Emiliano. 2011. Computing semantic compositionality in distributional semantics. In Proceedings of the Ninth International Conference on Computational Semantics, IWCS '11, pages 135-144, Association for Computational Linguistics, Stroudsburg, PA.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Distributional structure. Word", |
| "authors": [ |
| { |
| "first": "Zellig", |
| "middle": [], |
| "last": "Harris", |
| "suffix": "" |
| } |
| ], |
| "year": 1954, |
| "venue": "", |
| "volume": "10", |
| "issue": "", |
| "pages": "146--162", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harris, Zellig. 1954. Distributional structure. Word, 10:146-162.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Learning compositionality functions on word embeddings for modelling attribute meaning in adjective-noun phrases", |
| "authors": [ |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Hartung", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabian", |
| "middle": [], |
| "last": "Kaupmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Soufian", |
| "middle": [], |
| "last": "Jebbara", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Cimiano", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Meeting of the European Chapter of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "54--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hartung, Matthias, Fabian Kaupmann, Soufian Jebbara, and Philipp Cimiano. 2017. Learning compositionality functions on word embeddings for modelling attribute meaning in adjective-noun phrases. In Proceedings of the 15th Meeting of the European Chapter of the Association for Computational Linguistics (Volume 1), pages 54-64.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "SemEval-2013 Task 4: Free paraphrases of noun compounds", |
| "authors": [ |
| { |
| "first": "Iris", |
| "middle": [], |
| "last": "Hendrickx", |
| "suffix": "" |
| }, |
| { |
| "first": "Zornitsa", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "Diarmuid", |
| "middle": [], |
| "last": "\u00d3 S\u00e9aghdha", |
| "suffix": "" |
| }, |
| { |
| "first": "Stan", |
| "middle": [], |
| "last": "Szpakowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Tony", |
| "middle": [], |
| "last": "Veale", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of *SEM 2013", |
| "volume": "2", |
| "issue": "", |
| "pages": "138--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hendrickx, Iris, Zornitsa Kozareva, Preslav Nakov, Diarmuid \u00d3 S\u00e9aghdha, Stan Szpakowicz, and Tony Veale. 2013. SemEval-2013 Task 4: Free paraphrases of noun compounds. In Proceedings of *SEM 2013 (Volume 2 - SemEval), pages 138-143, Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Propbank annotation of multilingual light verb constructions", |
| "authors": [ |
| { |
| "first": "Jena", |
| "middle": [ |
| "D" |
| ], |
| "last": "Hwang", |
| "suffix": "" |
| }, |
| { |
| "first": "Archna", |
| "middle": [], |
| "last": "Bhatia", |
| "suffix": "" |
| }, |
| { |
| "first": "Clare", |
| "middle": [], |
| "last": "Bonial", |
| "suffix": "" |
| }, |
| { |
| "first": "Aous", |
| "middle": [], |
| "last": "Mansouri", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashwini", |
| "middle": [], |
| "last": "Vaidya", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the LAW 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "82--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hwang, Jena D., Archna Bhatia, Clare Bonial, Aous Mansouri, Ashwini Vaidya, Nianwen Xue, and Martha Palmer. 2010. Propbank annotation of multilingual light verb constructions. In Proceedings of the LAW 2010, pages 82-90, Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Towards a better semantic role labelling of complex predicates", |
| "authors": [ |
| { |
| "first": "Glorianna", |
| "middle": [], |
| "last": "Jagfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "Lonneke", |
| "middle": [], |
| "last": "Van Der Plas", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of NAACL Student Research Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "33--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jagfeld, Glorianna, and Lonneke van der Plas. 2015. Towards a better semantic role labelling of complex predicates. In Proceedings of NAACL Student Research Workshop, pages 33-39, Denver.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Speech and Language Processing", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "H" |
| ], |
| "last": "Martin", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jurafsky, Daniel, and James H. Martin. 2009. Speech and Language Processing, 2nd edition. Prentice Hall, Upper Saddle River, NJ.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "A systematic study of semantic vector space model parameters", |
| "authors": [ |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC) at EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "21--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kiela, Douwe, and Stephen Clark. 2014. A systematic study of semantic vector space model parameters. In Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC) at EACL, pages 21-30.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Distinguishing literal and non-literal usage of German particle verbs", |
| "authors": [ |
| { |
| "first": "Maximilian", |
| "middle": [], |
| "last": "K\u00f6per", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "353--362", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K\u00f6per, Maximilian, and Sabine Schulte im Walde. 2016. Distinguishing literal and non-literal usage of German particle verbs. In HLT-NAACL, pages 353-362.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Dead parrots make bad pets: Exploring modifier effects in noun phrases", |
| "authors": [ |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Third Joint Conference on Lexical and Computational Semantics, *SEM@COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "171--181", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kruszewski, Germ\u00e1n, and Marco Baroni. 2014. Dead parrots make bad pets: Exploring modifier effects in noun phrases. In Proceedings of the Third Joint Conference on Lexical and Computational Semantics, *SEM@COLING 2014, August 23-24, 2014, pages 171-181, The *SEM 2014 Organizing Committee, Dublin.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "An introduction to latent semantic analysis", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "K" |
| ], |
| "last": "Landauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "W" |
| ], |
| "last": "Foltz", |
| "suffix": "" |
| }, |
| { |
| "first": "Darrell", |
| "middle": [], |
| "last": "Laham", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Discourse Processes", |
| "volume": "25", |
| "issue": "", |
| "pages": "259--284", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Landauer, Thomas K., Peter W. Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse Processes, 25(2-3):259-284.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "A large scale evaluation of distributional semantic models: Parameters, interactions and model selection", |
| "authors": [ |
| { |
| "first": "Gabriella", |
| "middle": [], |
| "last": "Lapesa", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Evert", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "531--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lapesa, Gabriella, and Stefan Evert. 2014. A large scale evaluation of distributional semantic models: Parameters, interactions and model selection. Transactions of the Association for Computational Linguistics, 2:531-545.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Large-scale evaluation of dependency-based DSMs: Are they worth the effort?", |
| "authors": [ |
| { |
| "first": "Gabriella", |
| "middle": [], |
| "last": "Lapesa", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Evert", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "EACL 2017", |
| "volume": "", |
| "issue": "", |
| "pages": "394--400", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lapesa, Gabriella, and Stefan Evert. 2017. Large-scale evaluation of dependency-based DSMs: Are they worth the effort? In EACL 2017, pages 394-400.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "How much is enough?: Data requirements for statistical NLP. CoRR", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Lauer", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lauer, Mark. 1995. How much is enough?: Data requirements for statistical NLP. CoRR, abs/cmp-lg/9509001.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Improving distributional similarity with lessons learned from word embeddings", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "211--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Levy, Omer, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Automatic retrieval and clustering of similar words", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 17th International Conference on Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "768--774", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Dekang. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th International Conference on Computational Linguistics (Volume 2), pages 768-774.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Automatic identification of non-compositional phrases", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "317--324", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Dekang. 1999. Automatic identification of non-compositional phrases. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, pages 317-324.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Detecting a continuum of compositionality in phrasal verbs", |
| "authors": [ |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Association for Computational Linguistics 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment", |
| "volume": "", |
| "issue": "", |
| "pages": "73--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McCarthy, Diana, Bill Keller, and John Carroll. 2003. Detecting a continuum of compositionality in phrasal verbs. In Proceedings of the Association for Computational Linguistics 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment, pages 73-80, Association for Computational Linguistics, Sapporo, Japan.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Linguistic regularities in continuous space word representations", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Zweig", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "746--751", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikolov, Tomas, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In HLT-NAACL, pages 746-751.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Vector-based models of semantic composition", |
| "authors": [ |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "236--244", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell, Jeff, and Mirella Lapata. 2008. Vector-based models of semantic composition. In Association for Computational Linguistics, pages 236-244.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Composition in distributional models of semantics", |
| "authors": [ |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Cognitive Science", |
| "volume": "34", |
| "issue": "8", |
| "pages": "1388--1429", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell, Jeff, and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388-1429.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Distributional measures of semantic distance: A survey", |
| "authors": [ |
| { |
| "first": "Saif", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "Graeme", |
| "middle": [], |
| "last": "Hirst", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammad, Saif, and Graeme Hirst. 2012. Distributional measures of semantic distance: A survey. CoRR, abs/1203.1858.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Paraphrasing verbs for noun compound interpretation", |
| "authors": [ |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the LREC Workshop Towards a Shared Task for MWEs", |
| "volume": "", |
| "issue": "", |
| "pages": "46--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nakov, Preslav. 2008. Paraphrasing verbs for noun compound interpretation. In Proceedings of the LREC Workshop Towards a Shared Task for MWEs, pages 46-49.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "On the interpretation of noun compounds: Syntax, semantics, and entailment", |
| "authors": [ |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Natural Language Engineering", |
| "volume": "19", |
| "issue": "", |
| "pages": "291--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nakov, Preslav. 2013. On the interpretation of noun compounds: Syntax, semantics, and entailment. Natural Language Engineering, 19:291-330.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Maltparser: A data-driven parser-generator for dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Conference on Language Resources and Evaluation", |
| "volume": "6", |
| "issue": "", |
| "pages": "2216--2219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, Joakim, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parser-generator for dependency parsing. In Proceedings of the Conference on Language Resources and Evaluation (Volume 6), pages 2216-2219.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Constructing semantic space models from parsed corpora", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pad\u00f3", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "128--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pad\u00f3, Sebastian, and Mirella Lapata. 2003. Constructing semantic space models from parsed corpora. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (Volume 1), pages 128-135.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Dependency-based construction of semantic space models", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pad\u00f3", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "2", |
| "pages": "161--199", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pad\u00f3, Sebastian, and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161-199.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Nothing like good old frequency: Studying context filters for distributional thesauri", |
| "authors": [ |
| { |
| "first": "Muntsa", |
| "middle": [], |
| "last": "Padr\u00f3", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Idiart", |
| "suffix": "" |
| }, |
| { |
| "first": "Aline", |
| "middle": [], |
| "last": "Villavicencio", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Padr\u00f3, Muntsa, Marco Idiart, Aline Villavicencio, and Carlos Ramisch. 2014a. Comparing similarity measures for distributional thesauri. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014), pages 2964-2971, European Language Resources Association, Reykjavik. Padr\u00f3, Muntsa, Marco Idiart, Aline Villavicencio, and Carlos Ramisch. 2014b. Nothing like good old frequency: Studying context filters for distributional thesauri. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (Short Papers), pages 419-424, Doha, Qatar. Pennington, Jeffrey, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Association for Computational Linguistics, Doha, Qatar.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "How naked is the naked truth? A multilingual lexicon of nominal compound compositionality", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvio", |
| "middle": [], |
| "last": "Cordeiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Leonardo", |
| "middle": [], |
| "last": "Zilio", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Idiart", |
| "suffix": "" |
| }, |
| { |
| "first": "Aline", |
| "middle": [], |
| "last": "Villavicencio", |
| "suffix": "" |
| }, |
| { |
| "first": "Rodrigo", |
| "middle": [], |
| "last": "Wilkens", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "The 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "156--161", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramisch, Carlos, Silvio Cordeiro, Leonardo Zilio, Marco Idiart, Aline Villavicencio, and Rodrigo Wilkens. 2016. How naked is the naked truth? A multilingual lexicon of nominal compound compositionality. In The 54th Annual Meeting of the Association for Computational Linguistics, pages 156-161.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Filtering and measuring the intrinsic quality of human compositionality judgments", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvio", |
| "middle": [ |
| "Ricardo" |
| ], |
| "last": "Cordeiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Aline", |
| "middle": [], |
| "last": "Villavicencio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 12th Workshop on Multiword Expressions (MWE 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "32--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramisch, Carlos, Silvio Ricardo Cordeiro, and Aline Villavicencio. 2016. Filtering and measuring the intrinsic quality of human compositionality judgments. In Proceedings of the 12th Workshop on Multiword Expressions (MWE 2016), pages 32-37, Berlin.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "An empirical study on compositionality in compound nouns", |
| "authors": [ |
| { |
| "first": "Siva", |
| "middle": [], |
| "last": "Reddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Suresh", |
| "middle": [], |
| "last": "Manandhar", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "210--218", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reddy, Siva, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in compound nouns. In Proceedings of the 5th International Joint Conference on Natural Language Processing 2011 (IJCNLP 2011), pages 210-218, Chiang Mai, Thailand.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Improving statistical machine translation using domain bilingual multiword expressions", |
| "authors": [ |
| { |
| "first": "Zhixiang", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Yajuan", |
| "middle": [], |
| "last": "L\u00fc", |
| "suffix": "" |
| }, |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yun", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the ACL 2009 Workshop on MWEs", |
| "volume": "", |
| "issue": "", |
| "pages": "47--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ren, Zhixiang, Yajuan L\u00fc, Jie Cao, Qun Liu, and Yun Huang. 2009. Improving statistical machine translation using domain bilingual multiword expressions. In Proceedings of the ACL 2009 Workshop on MWEs, pages 47-54, Singapore.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "A single word is not enough: Ranking multiword expressions using distributional semantics", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedl", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Biemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2430--2440", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Riedl, Martin, and Chris Biemann. 2015. A single word is not enough: Ranking multiword expressions using distributional semantics. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2430-2440, Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Feature norms of German noun compounds", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Roller", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 10th Workshop on Multiword Expressions (MWE)", |
| "volume": "", |
| "issue": "", |
| "pages": "104--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roller, Stephen, and Sabine Schulte im Walde. 2014. Feature norms of German noun compounds. In Proceedings of the 10th Workshop on Multiword Expressions (MWE), pages 104-108, Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "The (un)expected effects of applying standard cleansing models to human ratings on compositionality", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Roller", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| }, |
| { |
| "first": "Silke", |
| "middle": [], |
| "last": "Scheible", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 9th Workshop on Multiword Expressions", |
| "volume": "", |
| "issue": "", |
| "pages": "32--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roller, Stephen, Sabine Schulte im Walde, and Silke Scheible. 2013. The (un)expected effects of applying standard cleansing models to human ratings on compositionality. In Proceedings of the 9th Workshop on Multiword Expressions, pages 32-41, Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "Multiword expressions: A pain in the neck for NLP", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [ |
| "A" |
| ], |
| "last": "Sag", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Bond", |
| "suffix": "" |
| }, |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Copestake", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Computational Linguistics and Intelligent Text Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1--15", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sag, Ivan A, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002, Multiword expressions: A pain in the neck for NLP. In Computational Linguistics and Intelligent Text Processing. Springer, New York, pages 1-15.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Using distributional similarity of multi-way translations to predict multiword expression compositionality", |
| "authors": [ |
| { |
| "first": "Bahar", |
| "middle": [], |
| "last": "Salehi", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "472--481", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Salehi, Bahar, Paul Cook, and Timothy Baldwin. 2014. Using distributional similarity of multi-way translations to predict multiword expression compositionality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 472-481, Gothenburg, Sweden.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "A word embedding approach to predicting the compositionality of multiword expressions", |
| "authors": [ |
| { |
| "first": "Bahar", |
| "middle": [], |
| "last": "Salehi", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "977--983", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Salehi, Bahar, Paul Cook, and Timothy Baldwin. 2015. A word embedding approach to predicting the compositionality of multiword expressions. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 977-983, Denver.", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "The impact of multiword expression compositionality on machine translation evaluation", |
| "authors": [ |
| { |
| "first": "Bahar", |
| "middle": [], |
| "last": "Salehi", |
| "suffix": "" |
| }, |
| { |
| "first": "Nitika", |
| "middle": [], |
| "last": "Mathur", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 11th Workshop on Multiword Expressions", |
| "volume": "", |
| "issue": "", |
| "pages": "54--59", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Salehi, Bahar, Nitika Mathur, Paul Cook, and Timothy Baldwin. 2015. The impact of multiword expression compositionality on machine translation evaluation. In Proceedings of the 11th Workshop on Multiword Expressions, pages 54-59, Association for Computational Linguistics, Denver.", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "Matrix factorization using window sampling and negative sampling for improved word representations", |
| "authors": [ |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Salle", |
| "suffix": "" |
| }, |
| { |
| "first": "Aline", |
| "middle": [], |
| "last": "Villavicencio", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Idiart", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "419--424", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Salle, Alexandre, Aline Villavicencio, and Marco Idiart. 2016. Matrix factorization using window sampling and negative sampling for improved word representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 419-424, Berlin.", |
| "links": null |
| }, |
| "BIBREF69": { |
| "ref_id": "b69", |
| "title": "Treetagger-A language independent part-of-speech tagger", |
| "authors": [ |
| { |
| "first": "Helmut", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "43", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schmid, Helmut. 1995. Treetagger-A language independent part-of-speech tagger. Institut f\u00fcr Maschinelle Sprachverarbeitung, Universit\u00e4t Stuttgart, 43:28.", |
| "links": null |
| }, |
| "BIBREF70": { |
| "ref_id": "b70", |
| "title": "SemEval 2016 task 10: Detecting minimal semantic units and their meanings (DiMSUM)", |
| "authors": [ |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Johannsen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marine", |
| "middle": [], |
| "last": "Carpuat", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of SemEval", |
| "volume": "", |
| "issue": "", |
| "pages": "546--559", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schneider, Nathan, Dirk Hovy, Anders Johannsen, and Marine Carpuat. 2016. SemEval 2016 task 10: Detecting minimal semantic units and their meanings (DiMSUM). In Proceedings of SemEval, pages 546-559, San Diego.", |
| "links": null |
| }, |
| "BIBREF71": { |
| "ref_id": "b71", |
| "title": "Is knowledge-free induction of multiword unit dictionary headwords a solved problem?", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Schone", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "100--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schone, Patrick, and Daniel Jurafsky. 2001. Is knowledge-free induction of multiword unit dictionary headwords a solved problem? In Proceedings of Empirical Methods in Natural Language Processing, pages 100-108, Pittsburgh.", |
| "links": null |
| }, |
| "BIBREF72": { |
| "ref_id": "b72", |
| "title": "GhoSt-NN: A representative gold standard of German noun-noun compound", |
| "authors": [ |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "H\u00e4tty", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Bott", |
| "suffix": "" |
| }, |
| { |
| "first": "Nana", |
| "middle": [], |
| "last": "Khvtisavrishvili", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "2285--2292", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schulte im Walde, Sabine, Anna H\u00e4tty, Stefan Bott, and Nana Khvtisavrishvili. 2016. GhoSt-NN: A representative gold standard of German noun-noun compound. In Proceedings of the Conference on Language Resources and Evaluation, pages 2285-2292.", |
| "links": null |
| }, |
| "BIBREF73": { |
| "ref_id": "b73", |
| "title": "Exploring vector space models to predict the compositionality of German noun-noun compounds", |
| "authors": [ |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Roller", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of *SEM 2013", |
| "volume": "1", |
| "issue": "", |
| "pages": "255--265", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schulte im Walde, Sabine, Stefan M\u00fcller, and Stefan Roller. 2013. Exploring vector space models to predict the compositionality of German noun-noun compounds. In Proceedings of *SEM 2013 (Volume 1), pages 255-265. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF74": { |
| "ref_id": "b74", |
| "title": "Semantic compositionality through recursive matrix-vector spaces", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Brody", |
| "middle": [], |
| "last": "Huval", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1201--1211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Socher, Richard, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201-1211.", |
| "links": null |
| }, |
| "BIBREF75": { |
| "ref_id": "b75", |
| "title": "Generation of compound words in statistical machine translation into compounding languages", |
| "authors": [ |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Stymne", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Cancedda", |
| "suffix": "" |
| }, |
| { |
| "first": "Lars", |
| "middle": [], |
| "last": "Ahrenberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Computational Linguistics", |
| "volume": "39", |
| "issue": "4", |
| "pages": "1067--1108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stymne, Sara, Nicola Cancedda, and Lars Ahrenberg. 2013. Generation of compound words in statistical machine translation into compounding languages. Computational Linguistics, 39(4):1067-1108.", |
| "links": null |
| }, |
| "BIBREF76": { |
| "ref_id": "b76", |
| "title": "Extraction of multi-word expressions from small parallel corpora", |
| "authors": [ |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Shuly", |
| "middle": [], |
| "last": "Wintner", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Natural Language Engineering", |
| "volume": "18", |
| "issue": "04", |
| "pages": "549--573", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tsvetkov, Yulia, and Shuly Wintner. 2012. Extraction of multi-word expressions from small parallel corpora. Natural Language Engineering, 18(04):549-573.", |
| "links": null |
| }, |
| "BIBREF77": { |
| "ref_id": "b77", |
| "title": "From frequency to meaning: vector space models of semantics", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Turney", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "37", |
| "issue": "1", |
| "pages": "141--188", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Turney, Peter D., and Patrick Pantel. 2010. From frequency to meaning: vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141-188.", |
| "links": null |
| }, |
| "BIBREF78": { |
| "ref_id": "b78", |
| "title": "Multiway tensor factorization for unsupervised lexical acquisition", |
| "authors": [ |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Van De Cruys", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Rimell", |
| "suffix": "" |
| }, |
| { |
| "first": "Thierry", |
| "middle": [], |
| "last": "Poibeau", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "COLING 2012", |
| "volume": "", |
| "issue": "", |
| "pages": "2703--2720", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Van de Cruys, Tim, Laura Rimell, Thierry Poibeau, and Anna Korhonen. 2012. Multiway tensor factorization for unsupervised lexical acquisition. In COLING 2012, pages 2703-2720.", |
| "links": null |
| }, |
| "BIBREF79": { |
| "ref_id": "b79", |
| "title": "Learning semantic composition to detect non-compositionality of multiword expressions", |
| "authors": [ |
| { |
| "first": "Majid", |
| "middle": [], |
| "last": "Yazdani", |
| "suffix": "" |
| }, |
| { |
| "first": "Meghdad", |
| "middle": [], |
| "last": "Farahmand", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1733--1742", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yazdani, Majid, Meghdad Farahmand, and James Henderson. 2015. Learning semantic composition to detect non-compositionality of multiword expressions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1733-1742, Association for Computational Linguistics, Lisbon.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "deviations (\u03c3 H , \u03c3 M , and \u03c3 HM ) as a function of hc HM in FR-comp. Right: Average compositionality (hc H , hc M , and hc HM ) as a function of hc HM in FR-comp.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "mean \u2297 = geometric mean Linear regr. of arith. mean Linear regr. of geom. mean", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "Figure 2 Relation between hc H \u2297 hc M and hc HM in FR-comp, using arithmetic and geometric means.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "text": "Schema of a compositionality prediction configuration based on a composition function. Thick arrows indicate corpus-based vectors of two-word compounds treated as a single token. The schema also covers the evaluation of the compositionality prediction configuration (top right).", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "text": "I -t h r e s h / l e m m a P P M I -t h r e s h / s u r f a c e + P P M I -t h r e s h / s u r f a c e P P M I -t h r e s h / l e m m a P o S w 2 v -c b o w / l e m m a P o S w 2 v -s g / l e m m a P o S w 2 v -c b o w / l e m m a w 2 v -c b o w / s u r f a c e w 2 v -s g / l e m m a w 2 v -s g / s u r f a c e w 2 v -c b o w / s u r f a c e + w 2 v -s g / s u r f a c e +", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "num": null, |
| "text": "Results with highest Spearman for oracle and cross-validation, the latter with a confidence interval of 95%; (a) top left: overall Spearman correlations per DSM and language, (b) top right: different WORDFORM values and DSMs for English, (c) bottom left: different DIMENSION values and DSMs for French, and (d) bottom right: different WINDOWSIZE values and DSMs for Portuguese.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "num": null, |
| "text": "\u03c1 for increasing corpus sizes for PPMI-thresh (left) and w2v-sg (right) for EN-comp in red, FR-comp in blue, and PT-comp in green. Corpus sizes are in the x-axis in billion words. Curves for PPMI-thresh show average and standard deviation (error bars) across 8 samplings of the corpus.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF8": { |
| "num": null, |
| "text": "Distribution of improv maxsim (y-axis) as a function of rk human (x-axis). Outliers are indicated by numbers 1-8 (positive improvement) and letters A-H (negative improvement).", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF10": { |
| "num": null, |
| "text": "Distribution of improv geom (y-axis) as a function of rk human (x-axis). Outliers are indicated by numbers 1-8 (positive improvement) and letters A-H (negative improvement).", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF3": { |
| "text": "Farahmand .487/.424 .435/.376 .472/.404 .400/.358 .449/.431 .512/.471 .507/.468 Reddy (r) .738/.726 .732/.717 .762/.768 .783/.787 .787/.787 .803/.798 .814/.814", |
| "content": "<table><tr><td/><td>PPMI-SVD</td><td>PPMI-TopK</td><td>PPMI-thresh</td><td>glove</td><td>lexvec</td><td>w2v-cbow</td><td>w2v-sg</td></tr><tr><td>Reddy (\u03c1)</td><td colspan=\"7\">.743/.743 .706/.716 .791/.803 .754/.759 .774/.773 .796/.796 .812/.812</td></tr><tr><td>EN-comp</td><td colspan=\"7\">.655/.666 .624/.632 .688/.704 .638/.651 .646/.658 .716/.730 .726/.741</td></tr><tr><td>FR-comp</td><td>.584</td><td>.550</td><td>.702</td><td>.680</td><td>.677</td><td>.652</td><td>.653</td></tr><tr><td>PT-comp</td><td>.530</td><td>.519</td><td>.602</td><td>.555</td><td>.570</td><td>.588</td><td>.586</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF4": { |
| "text": "Configurations with best performances on EN-comp and on EN-comp Ext . Best performances are measured on EN-comp and the corresponding configurations are applied to EN-comp Ext .", |
| "content": "<table><tr><td>DSM</td><td colspan=\"5\">WORDFORM WINDOWSIZE DIMENSION \u03c1 EN-comp \u03c1 EN-comp Ext</td></tr><tr><td>PPMI-SVD</td><td>surface</td><td>1+1</td><td>250</td><td>0.655</td><td>0.692</td></tr><tr><td>PPMI-TopK</td><td>lemma PoS</td><td>8+8</td><td>1,000</td><td>0.624</td><td>0.680</td></tr><tr><td>PPMI-thresh</td><td>lemma PoS</td><td>8+8</td><td>750</td><td>0.688</td><td>0.675</td></tr><tr><td>glove</td><td>lemma PoS</td><td>8+8</td><td>500</td><td>0.637</td><td>0.670</td></tr><tr><td>lexvec w2v-cbow</td><td>lemma PoS surface +</td><td>8+8 1+1</td><td>250 750</td><td>0.646 0.716</td><td>0.685 0.731</td></tr><tr><td>w2v-sg</td><td>surface +</td><td>1+1</td><td>750</td><td>0.726</td><td>0.733</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF5": { |
| "text": "Data setpc uniform pc maxsim pc geom pc arith pc head pc mod", |
| "content": "<table><tr><td>EN-comp</td><td>.726</td><td>.730</td><td>.677</td><td>.718</td><td>.555</td><td>.677</td></tr><tr><td>FR-comp</td><td>.702</td><td>.693</td><td>.699</td><td>.703</td><td>.617</td><td>.645</td></tr><tr><td>PT-comp</td><td>.602</td><td>.590</td><td>.580</td><td>.598</td><td>.558</td><td>.486</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF6": { |
| "text": "DSM and Separman \u03c1 of pc maxsim , as well as the average weights for the head (\u03b2) and for the modifier (1 \u2212 \u03b2) on each data set.", |
| "content": "<table><tr><td>Data set</td><td>DSM</td><td colspan=\"3\">\u03c1 maxsim \u03b2 (head) 1 \u2212 \u03b2 (mod.)</td></tr><tr><td colspan=\"2\">EN-comp w2v-sg</td><td>.730</td><td>.55</td><td>.45</td></tr><tr><td colspan=\"2\">FR-comp PPMI-thresh</td><td>.693</td><td>.68</td><td>.32</td></tr><tr><td>PT-comp</td><td>w2v-sg</td><td>.590</td><td>.68</td><td>.32</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF7": { |
| "text": "Outlier compounds with extreme positive/negative improv maxsim values. Example identifiers correspond to numbers/letters shown inFigure 9.improv maxsim for PPMI-thresh", |
| "content": "<table><tr><td>ID</td><td>improv</td><td>hc HM</td><td>Compound 'translation' (gloss)</td></tr><tr><td>1</td><td>+90</td><td>2.82</td><td>FR premier plan 'foreground' (lit. first plan)</td></tr><tr><td>2</td><td>+88</td><td>2.90</td><td>FR mati\u00e9re premi\u00e9re 'raw material' (lit. matter primary)</td></tr><tr><td>3</td><td>+86</td><td>2.89</td><td>PT amigo oculto 'secret Santa' (lit. friend hidden)</td></tr><tr><td>4</td><td>+67</td><td>1.92</td><td>FR premi\u00e9re dame 'first lady' (lit. first lady)</td></tr><tr><td>5</td><td>+63</td><td>3.19</td><td>PT caixa forte 'safe, vault' (lit. box strong)</td></tr><tr><td>6</td><td>+58</td><td>3.14</td><td>PT prato feito 'blue-plate special' (lit. plate ready-made)</td></tr><tr><td>7</td><td>+53</td><td>2.90</td><td>FR id\u00e9e re\u00e7ue 'popular belief' (lit. idea received)</td></tr><tr><td>8</td><td>+48</td><td>3.00</td><td>FR mar\u00e9e noire 'oil spill' (lit. tide black)</td></tr><tr><td>H</td><td>\u221242</td><td>1.52</td><td>PT alta costura 'haute couture' (lit. high sewing)</td></tr><tr><td>G</td><td>\u221244</td><td>2.84</td><td>EN half sister</td></tr><tr><td>F</td><td>\u221244</td><td>0.54</td><td>EN melting pot</td></tr><tr><td>E</td><td>\u221246</td><td>1.29</td><td>FR berger allemand 'German shepherd' (lit. shepherd German)</td></tr><tr><td>D</td><td>\u221252</td><td>2.87</td><td>PT mar aberto 'open sea' (lit. sea open)</td></tr><tr><td>C</td><td>\u221255</td><td>1.43</td><td>PT febre amarela 'yellow fever' (lit. fever yellow)</td></tr><tr><td>B</td><td>\u221281</td><td>0.79</td><td>PT livro aberto 'open book' (lit. book open)</td></tr><tr><td>A</td><td>\u221283</td><td>1.06</td><td>PT cora\u00e7\u00e3o partido 'broken heart' (lit. heart broken)</td></tr><tr><td/><td/><td/><td>improv maxsim for w2v-sg</td></tr><tr><td>ID</td><td>improv</td><td>hc HM</td><td>Compound 'translation' (gloss)</td></tr><tr><td>1</td><td>+138</td><td>3.58</td><td>PT cerca viva 'hedge' (lit. 
fence living)</td></tr><tr><td>2</td><td>+126</td><td>3.67</td><td>FR coffre fort 'safe, vault' (lit. chest/box strong)</td></tr><tr><td>3</td><td>+116</td><td>3.19</td><td>PT caixa forte 'safe, vault' (lit. chest/box strong)</td></tr><tr><td>4</td><td>+107</td><td>2.03</td><td>PT golpe baixo 'low blow' (lit. punch low)</td></tr><tr><td>5</td><td>+100</td><td>3.97</td><td>PT primeira necessidade 'first necessity' (lit. first necessity)</td></tr><tr><td>6</td><td>+95</td><td>4.11</td><td>EN role model</td></tr><tr><td>7</td><td>+79</td><td>4.47</td><td>FR bonne pratique 'good practice' (lit. good practice)</td></tr><tr><td>8</td><td>+69</td><td>3.64</td><td>PT carta aberta 'open letter' (lit. letter open)</td></tr><tr><td>H</td><td>\u221268</td><td>0.40</td><td>FR bras droit 'most important helper/assistant' (lit. arm right)</td></tr><tr><td>G</td><td>\u221270</td><td>1.52</td><td>PT alta costura 'haute couture' (lit. high sewing)</td></tr><tr><td>F</td><td>\u221271</td><td>3.66</td><td>PT carne vermelha 'red meat' (lit. meat red)</td></tr><tr><td>E</td><td>\u221282</td><td>1.35</td><td>PT alto mar 'high seas' (lit. high sea)</td></tr><tr><td>D</td><td>\u221285</td><td>1.10</td><td>PT mesa redonda 'round table' (lit. table round)</td></tr><tr><td>C</td><td>\u221286</td><td>2.84</td><td>EN half sister</td></tr><tr><td>B</td><td>\u2212109</td><td>1.43</td><td>PT febre amarela 'yellow fever' (lit. fever yellow)</td></tr><tr><td>A</td><td>\u2212128</td><td>1.06</td><td>PT cora\u00e7\u00e3o partido 'broken heart' (lit. heart broken)</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF8": { |
| "text": "Outlier compounds with extreme positive/negative improv geom values. Example identifiers correspond to numbers/letters shown onFigure 10. improv geom for PPMI-thresh", |
| "content": "<table><tr><td>ID</td><td>improv</td><td>hc HM</td><td>Compound 'translation' (gloss)</td></tr><tr><td>1</td><td>+157</td><td>1.31</td><td>EN snail mail</td></tr><tr><td>2</td><td>+110</td><td>3.43</td><td>FR guerre civile 'civil war' (lit. war civil)</td></tr><tr><td>3</td><td>+109</td><td>2.83</td><td>FR disque dur 'hard drive' (lit. disk hard)</td></tr><tr><td>4</td><td>+104</td><td>1.35</td><td>PT alto mar 'high seas' (lit. high sea)</td></tr><tr><td>5</td><td>+93</td><td>2.63</td><td>PT\u00f4nibus executivo 'minibus' (lit. bus executive)</td></tr><tr><td>6</td><td>+85</td><td>3.32</td><td>EN search engine</td></tr><tr><td>7</td><td>+82</td><td>2.62</td><td>PT carro forte 'armored car' (lit. car strong)</td></tr><tr><td>8</td><td>+79</td><td>1.18</td><td>EN noble gas</td></tr><tr><td>H</td><td>\u2212190</td><td>2.44</td><td>PT ar condicionado 'air conditioning' (lit. air conditioned)</td></tr><tr><td>G</td><td>\u2212202</td><td>3.67</td><td>FR coffre fort 'safe, vault' (lit. chest/box strong)</td></tr><tr><td>F</td><td>\u2212202</td><td>3.57</td><td>FR bon sens 'common sense' (lit. good sense)</td></tr><tr><td>E</td><td>\u2212234</td><td>3.14</td><td>PT prato feito 'blue-plate special' (lit. plate ready-made)</td></tr><tr><td>D</td><td>\u2212292</td><td>3.64</td><td>FR baie vitr\u00e9e 'open glass window' (lit. opening glassy)</td></tr><tr><td>C</td><td>\u2212327</td><td>3.64</td><td>PT carta aberta 'open letter' (lit. letter open)</td></tr><tr><td>B</td><td>\u2212370</td><td>4.08</td><td>PT vinho tinto 'red wine' (lit. wine dark-red)</td></tr><tr><td>A</td><td>\u2212376</td><td>1.69</td><td>PT circuito integrado 'short circuit' (lit. short circuit)</td></tr><tr><td/><td/><td/><td>improv geom for w2v-sg</td></tr><tr><td>ID</td><td>improv</td><td>hc HM</td><td>Compound 'translation' (gloss)</td></tr><tr><td>1</td><td>+228</td><td>0.40</td><td>FR bras droit 'most important helper/assistant' (lit. 
arm right)</td></tr><tr><td>2</td><td>+158</td><td>1.40</td><td>PT lua nova 'new moon' (lit. moon new)</td></tr><tr><td>3</td><td>+127</td><td>1.35</td><td>PT alto mar 'high seas' (lit. high sea)</td></tr><tr><td>4</td><td>+104</td><td>0.10</td><td>PT p\u00e9 direito 'ceiling height' (lit. foot right)</td></tr><tr><td>5</td><td>+89</td><td>1.24</td><td>EN carpet bombing</td></tr><tr><td>6</td><td>+75</td><td>1.60</td><td>PT lista negra 'black list' (lit. list black)</td></tr><tr><td>7</td><td>+73</td><td>0.65</td><td>PT arma branca 'cold weapon' (lit. weapon white)</td></tr><tr><td>8</td><td>+72</td><td>3.32</td><td>EN search engine</td></tr><tr><td>H</td><td>\u2212151</td><td>2.76</td><td>PT disco r\u00edgido 'hard drive' (lit. disk rigid)</td></tr><tr><td>G</td><td>\u2212169</td><td>4.63</td><td>EN subway system</td></tr><tr><td>F</td><td>\u2212190</td><td>2.62</td><td>PT carro forte 'armored car' (lit. car strong)</td></tr><tr><td>E</td><td>\u2212238</td><td>2.83</td><td>FR disque dur 'hard drive' (lit. disk hard)</td></tr><tr><td>D</td><td>\u2212256</td><td>2.84</td><td>EN half sister</td></tr><tr><td>C</td><td>\u2212260</td><td>3.64</td><td>PT carta aberta 'open letter' (lit. letter open)</td></tr><tr><td>B</td><td>\u2212266</td><td>4.47</td><td>FR bonne pratique 'good practice' (lit. good practice)</td></tr><tr><td>A</td><td>\u2212370</td><td>4.25</td><td>EN end user</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF10": { |
| "text": "Spearman's \u03c1 of best pc uniform models, separated into 3 ranges according to \u03c3 HM and according to hc HM , all with p < 0.05.", |
| "content": "<table><tr><td>DSM</td><td>full data set</td><td/><td colspan=\"2\">Ranges of \u03c3 HM</td><td colspan=\"2\">Ranges of hc HM</td><td/></tr><tr><td/><td/><td>low</td><td>mid</td><td>high</td><td>low</td><td>mid</td><td>high</td></tr><tr><td>PPMI-thresh</td><td>0.66</td><td>0.75</td><td>0.58</td><td>0.40</td><td>0.29</td><td>0.24</td><td>0.37</td></tr><tr><td>glove</td><td>0.63</td><td>0.73</td><td>0.54</td><td>0.35</td><td>0.27</td><td>0.26</td><td>0.35</td></tr><tr><td>lexvec</td><td>0.64</td><td>0.73</td><td>0.54</td><td>0.36</td><td>0.18</td><td>0.20</td><td>0.37</td></tr><tr><td>w2v-sg</td><td>0.66</td><td>0.73</td><td>0.58</td><td>0.43</td><td>0.16</td><td>0.24</td><td>0.32</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF12": { |
| "text": "Results using a higher number of iterations.", |
| "content": "<table><tr><td>Model (FR-comp)</td><td>\u03c1 base</td><td>\u03c1 iter=100</td><td>Difference (%)</td></tr><tr><td>w2v-cbow</td><td>.660</td><td>.640</td><td>(\u22122.0)</td></tr><tr><td>w2v-sg</td><td>.672</td><td>.636</td><td>(\u22123.7)</td></tr><tr><td>glove</td><td>.680</td><td>.677</td><td>(\u22120.3)</td></tr><tr><td>lexvec</td><td>.677</td><td>.671</td><td>(\u22120.6)</td></tr><tr><td>Model (Reddy)</td><td>\u03c1 base</td><td>\u03c1 iter=100</td><td>Difference (%)</td></tr><tr><td>w2v-cbow</td><td>.809</td><td>.766</td><td>(\u22124.3)</td></tr><tr><td>w2v-sg</td><td>.821</td><td>.777</td><td>(\u22124.4)</td></tr><tr><td>glove</td><td>.764</td><td>.746</td><td>(\u22121.8)</td></tr><tr><td>lexvec</td><td>.774</td><td>.757</td><td>(\u22121.7)</td></tr><tr><td>Model (PT-comp)</td><td>\u03c1 base</td><td>\u03c1 iter=100</td><td>Difference (%)</td></tr><tr><td>w2v-cbow</td><td>.588</td><td>.558</td><td>(\u22123.0)</td></tr><tr><td>w2v-sg</td><td>.586</td><td>.551</td><td>(\u22123.6)</td></tr><tr><td>glove</td><td>.555</td><td>.464</td><td>(\u22129.1)</td></tr><tr><td>lexvec</td><td>.570</td><td>.561</td><td>(\u22120.9)</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF13": { |
| "text": "Results for a higher minimum threshold of word count.", |
| "content": "<table><tr><td>Model (FR-comp)</td><td>\u03c1 base</td><td>\u03c1 mincount=50</td><td>Difference (%)</td></tr><tr><td>w2v-cbow</td><td>.660</td><td>.610</td><td>(\u22125.0)</td></tr><tr><td>w2v-sg</td><td>.672</td><td>.613</td><td>(\u22125.9)</td></tr><tr><td>glove</td><td>.680</td><td>.673</td><td>(\u22120.7)</td></tr><tr><td>PPMI-SVD</td><td>.584</td><td>.258</td><td>(\u221232.6)</td></tr><tr><td>lexvec</td><td>.677</td><td>.653</td><td>(\u22122.4)</td></tr><tr><td>Model (Reddy)</td><td>\u03c1 base</td><td>\u03c1 mincount=50</td><td>Difference (%)</td></tr><tr><td>w2v-cbow</td><td>.809</td><td>.778</td><td>(\u22123.1)</td></tr><tr><td>w2v-sg</td><td>.821</td><td>.776</td><td>(\u22124.5)</td></tr><tr><td>glove</td><td>.764</td><td>.672</td><td>(\u22129.2)</td></tr><tr><td>PPMI-SVD</td><td>.743</td><td>.515</td><td>(\u221222.8)</td></tr><tr><td>lexvec</td><td>.774</td><td>.738</td><td>(\u22123.6)</td></tr><tr><td>Model (PT-comp)</td><td>\u03c1 base</td><td>\u03c1 mincount=50</td><td>Difference (%)</td></tr><tr><td>w2v-cbow</td><td>.588</td><td>.580</td><td>(\u22120.8)</td></tr><tr><td>w2v-sg</td><td>.586</td><td>.575</td><td>(\u22121.1)</td></tr><tr><td>glove</td><td>.555</td><td>.540</td><td>(\u22121.5)</td></tr><tr><td>PPMI-SVD</td><td>.530</td><td>.418</td><td>(\u221211.1)</td></tr><tr><td>lexvec</td><td>.570</td><td>.566</td><td>(\u22120.4)</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF14": { |
| "text": "Results using a window of size 2+2.", |
| "content": "<table><tr><td>Model (FR-comp)</td><td>\u03c1 base</td><td>\u03c1 win=2+2</td><td>Difference (%)</td></tr><tr><td>PPMI-SVD</td><td>.584</td><td>.397</td><td>(\u221218.7)</td></tr><tr><td>PPMI-thresh</td><td>.702</td><td>.678</td><td>(\u22122.4)</td></tr><tr><td>glove</td><td>.680</td><td>.657</td><td>(\u22122.3)</td></tr><tr><td>lexvec</td><td>.677</td><td>.671</td><td>(\u22120.6)</td></tr><tr><td>w2v-cbow</td><td>.660</td><td>.644</td><td>(\u22121.6)</td></tr><tr><td>w2v-sg</td><td>.672</td><td>.639</td><td>(\u22123.3)</td></tr><tr><td>Model (Reddy)</td><td>\u03c1 base</td><td>\u03c1 win=2+2</td><td>Difference (%)</td></tr><tr><td>PPMI-SVD</td><td>.743</td><td>.583</td><td>(\u221216.0)</td></tr><tr><td>lexvec</td><td>.774</td><td>.757</td><td>(\u22121.7)</td></tr><tr><td>w2v-cbow</td><td>.809</td><td>.777</td><td>(\u22123.2)</td></tr><tr><td>w2v-sg</td><td>.821</td><td>.784</td><td>(\u22123.7)</td></tr><tr><td>Model (PT-comp)</td><td>\u03c1 base</td><td>\u03c1 win=2+2</td><td>Difference (%)</td></tr><tr><td>PPMI-SVD</td><td>.530</td><td>.446</td><td>(\u22128.4)</td></tr><tr><td>PPMI-thresh</td><td>.602</td><td>.561</td><td>(\u22124.1)</td></tr><tr><td>lexvec</td><td>.570</td><td>.564</td><td>(\u22120.6)</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF15": { |
| "text": "Results for higher numbers of dimensions (PPMI-thresh).", |
| "content": "<table><tr><td>Model (FR-comp)</td><td>\u03c1 dim=X</td><td>Difference (%)</td></tr><tr><td>dim = 250</td><td>.671</td><td>(\u22123.1)</td></tr><tr><td>dim = 500</td><td>.695</td><td>(\u22120.7)</td></tr><tr><td>dim = 750</td><td>.702</td><td>(0.0)</td></tr><tr><td>dim = 1,000</td><td>.694</td><td>(\u22120.8)</td></tr><tr><td>dim = 2,000</td><td>.645</td><td>(\u22125.8)</td></tr><tr><td>dim = 5,000</td><td>.636</td><td>(\u22126.7)</td></tr><tr><td>dim = 30,000</td><td>.552</td><td>(\u221215.1)</td></tr><tr><td>dim = 999,999</td><td>.539</td><td>(\u221216.3)</td></tr><tr><td>Model (Reddy)</td><td>\u03c1 dim=X</td><td>Difference (%)</td></tr><tr><td>dim = 250</td><td>.764</td><td>(\u22122.7)</td></tr><tr><td>dim = 500</td><td>.782</td><td>(\u22121.0)</td></tr><tr><td>dim = 750</td><td>.791</td><td>(0.0)</td></tr><tr><td>dim = 1,000</td><td>.784</td><td>(\u22120.7)</td></tr><tr><td>dim = 2,000</td><td>.760</td><td>(\u22123.1)</td></tr><tr><td>dim = 5,000</td><td>.744</td><td>(\u22124.7)</td></tr><tr><td>dim = 30,000</td><td>.700</td><td>(\u22129.1)</td></tr><tr><td>dim = 999,999</td><td>.566</td><td>(\u221222.5)</td></tr><tr><td>Model (PT-comp)</td><td>\u03c1 dim=X</td><td>Difference (%)</td></tr><tr><td>dim = 250</td><td>.543</td><td>(\u22125.9)</td></tr><tr><td>dim = 500</td><td>.546</td><td>(\u22125.6)</td></tr><tr><td>dim = 750</td><td>.602</td><td>(0.0)</td></tr><tr><td>dim = 1,000</td><td>.609</td><td>(+0.7)</td></tr><tr><td>dim = 2,000</td><td>.601</td><td>(\u22120.1)</td></tr><tr><td>dim = 5,000</td><td>.505</td><td>(\u22129.7)</td></tr><tr><td>dim = 30,000</td><td>.532</td><td>(\u22127.0)</td></tr><tr><td>dim = 999,999</td><td>.500</td><td>(\u221210.2)</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF16": { |
| "text": "Configurations with highest \u03c1 avg for nondeterministic models. Intrinsic quality measures for the raw and filtered data sets.", |
| "content": "<table><tr><td>Data set</td><td>DSM</td><td colspan=\"2\">configuration</td><td>\u03c1 1</td><td>\u03c1 2</td><td>\u03c1 3</td><td>\u03c1 avg stddev</td></tr><tr><td>Reddy</td><td>glove</td><td colspan=\"6\">lemma PoS .W 8 .d 250 .759 .760 .753 .757</td><td>.004</td></tr><tr><td/><td colspan=\"2\">w2v-cbow surface.W 1 .d 500</td><td colspan=\"5\">.796 .807 .799 .801</td><td>.006</td></tr><tr><td/><td>w2v-sg</td><td>surface.W 1 .d 750</td><td colspan=\"5\">.812 .788 .812 .804</td><td>.014</td></tr><tr><td colspan=\"8\">EN-comp glove w2v-cbow surface + .W 1 .d 750 lemma PoS .W 8 .d 500 .651 .646 .650 .649 .730 .732 .728 .730 w2v-sg surface + .W 1 .d 750 .741 .732 .721 .731</td><td>.003 .002 .010</td></tr><tr><td>Data set</td><td>\u03c3</td><td/><td colspan=\"2\">P \u03c3>1.5</td><td/><td/><td>DRR</td></tr><tr><td/><td>raw</td><td>filtered</td><td>raw</td><td/><td>filtered</td><td/></tr><tr><td>FR-comp</td><td>1.15</td><td>0.94</td><td colspan=\"2\">22.78%</td><td>13.89%</td><td/><td>87.34%</td></tr><tr><td>PT-comp</td><td>1.22</td><td>1.00</td><td colspan=\"2\">14.44%</td><td>6.11%</td><td/><td>87.81%</td></tr><tr><td>EN-comp 90</td><td>1.17</td><td>0.87</td><td colspan=\"2\">18.89%</td><td>3.33%</td><td/><td>83.61%</td></tr><tr><td>Reddy</td><td>0.99</td><td>-</td><td colspan=\"2\">5.56%</td><td>-</td><td/><td>-</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF17": { |
| "text": "Extrinsic quality measures for the raw and filtered data sets.", |
| "content": "<table><tr><td>Data set</td><td colspan=\"2\">EN-comp 90 raw filtered</td><td colspan=\"2\">FR-comp raw filtered</td><td colspan=\"2\">PT-comp raw filtered</td></tr><tr><td>PPMI-SVD</td><td>.604</td><td>.601</td><td>.584</td><td>.579</td><td>.530</td><td>.526</td></tr><tr><td>PPMI-TopK</td><td>.564</td><td>.571</td><td>.550</td><td>.545</td><td>.519</td><td>.516</td></tr><tr><td colspan=\"2\">PPMI-thresh .602</td><td>.607</td><td>.702</td><td>.700</td><td>.602</td><td>.601</td></tr><tr><td>glove</td><td>.538</td><td>.544</td><td>.680</td><td>.676</td><td>.555</td><td>.552</td></tr><tr><td>lexvec</td><td>.567</td><td>.572</td><td>.677</td><td>.676</td><td>.570</td><td>.568</td></tr><tr><td>w2v-cbow</td><td>.669</td><td>.665</td><td>.651</td><td>.651</td><td>.588</td><td>.587</td></tr><tr><td>w2v-sg</td><td>.665</td><td>.661</td><td>.653</td><td>.654</td><td>.586</td><td>.584</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF18": { |
| "text": "", |
| "content": "<table><tr><td>Compounds Compounds Compounds</td><td>hc HM</td><td colspan=\"2\">hc HM</td><td colspan=\"2\">Compounds</td><td colspan=\"2\">Compounds hc HM</td><td>hc HM</td></tr><tr><td>eager beaver double dutch</td><td>0.29</td><td/><td>0.36</td><td colspan=\"4\">market place marketing consultant 4.00</td><td>3.00</td></tr><tr><td>economic aid double whammy</td><td>2.48</td><td/><td>4.33</td><td colspan=\"4\">mental disorder medical procedure 4.83</td><td>4.89</td></tr><tr><td>elbow grease dream ticket</td><td>1.32</td><td/><td>0.56</td><td colspan=\"4\">middle school music festival 4.58</td><td>3.84</td></tr><tr><td>elbow room dutch courage</td><td>1.00</td><td/><td>0.61</td><td colspan=\"4\">milk tooth music journalist</td><td>4.54</td><td>1.43</td></tr><tr><td>entrance hall fair play</td><td>2.59</td><td/><td>4.17</td><td colspan=\"4\">mother tongue noise complaint 4.52</td><td>0.59</td></tr><tr><td>eternal rest fairy tale</td><td>1.68</td><td/><td>3.25</td><td colspan=\"2\">pain killer</td><td colspan=\"2\">narrow escape 2.17</td><td>1.75</td></tr><tr><td>fish story fall guy</td><td>1.36</td><td/><td>1.68</td><td colspan=\"4\">net income peace conference</td><td>4.46</td><td>2.94</td></tr><tr><td>flower child field work</td><td>2.10</td><td/><td>0.50</td><td colspan=\"2\">peace talk</td><td colspan=\"2\">news agency 4.13</td><td>4.39</td></tr><tr><td>food market football season</td><td>4.04</td><td/><td>3.82</td><td colspan=\"2\">pipe dream</td><td colspan=\"2\">noble gas</td><td>0.91</td><td>1.18</td></tr><tr><td>foot soldier fresh water</td><td>4.20</td><td/><td>1.95</td><td colspan=\"2\">poison pill</td><td colspan=\"2\">nut case</td><td>0.96</td><td>0.44</td></tr><tr><td>front man freudian slip</td><td>2.35</td><td/><td>1.64</td><td colspan=\"4\">old flame radioactive material</td><td>4.61</td><td>0.58</td></tr><tr><td>goose egg ghost town</td><td>1.50</td><td/><td>0.48</td><td colspan=\"4\">old hat radioactive waste</td><td>4.58</td><td>0.35</td></tr><tr><td>grey matter glass ceiling</td><td>0.81</td><td/><td>2.39</td><td colspan=\"2\">rainy season</td><td colspan=\"2\">old timer</td><td>4.23</td><td>0.89</td></tr><tr><td>guinea pig grass root</td><td>0.86</td><td/><td>0.45</td><td colspan=\"2\">rice paper</td><td colspan=\"2\">phone book</td><td>4.00</td><td>4.25</td></tr><tr><td>half sister hard drive</td><td>2.17</td><td/><td>2.84</td><td colspan=\"2\">shelf life</td><td colspan=\"2\">pillow slip</td><td>1.30</td><td>3.70</td></tr><tr><td>half wit hard shoulder</td><td>1.52</td><td/><td>1.16</td><td colspan=\"2\">skin tone</td><td colspan=\"2\">pocket book</td><td>3.88</td><td>1.42</td></tr><tr><td>health check head hunter</td><td>1.50</td><td/><td>4.17</td><td colspan=\"4\">prison guard smoke screen 1.11</td><td>4.89</td></tr><tr><td>high life health care</td><td>4.47</td><td/><td>1.67</td><td colspan=\"4\">prison term social insurance</td><td>2.83</td><td>4.79</td></tr><tr><td>inner circle heavy cross</td><td>1.17</td><td/><td>1.56</td><td colspan=\"2\">speed trap</td><td colspan=\"2\">private eye</td><td>3.71</td><td>0.82</td></tr><tr><td>inner product hen party</td><td>1.05</td><td/><td>3.00</td><td colspan=\"2\">stag night</td><td colspan=\"2\">record book</td><td>1.44</td><td>3.70</td></tr><tr><td>insane asylum home run</td><td>2.86</td><td/><td>3.95</td><td colspan=\"4\">research lab sugar daddy</td><td>0.44</td><td>4.75</td></tr><tr><td>insurance company honey trap</td><td>1.22</td><td/><td>5.00</td><td colspan=\"2\">tear gas</td><td colspan=\"2\">sex bomb</td><td>3.27</td><td>0.53</td></tr><tr><td>insurance policy hot potato</td><td>0.56</td><td/><td>4.15</td><td colspan=\"4\">silver lining time difference</td><td>4.41</td><td>0.35</td></tr><tr><td>iron collar incubation period</td><td>3.92</td><td/><td>3.88</td><td colspan=\"4\">sound judgement traffic control 3.69</td><td>3.39</td></tr><tr><td>labour union information age</td><td>3.40</td><td/><td>4.76</td><td colspan=\"2\">traffic jam</td><td colspan=\"2\">sparkling water 3.62</td><td>3.14</td></tr><tr><td>life belt injury time</td><td>3.20</td><td/><td>2.84</td><td colspan=\"2\">travel guide</td><td colspan=\"2\">street girl</td><td>4.38</td><td>3.16</td></tr><tr><td>life vest insider trading</td><td>3.88</td><td/><td>3.44</td><td colspan=\"4\">subway system wedding anniversary 4.86</td><td>4.63</td></tr><tr><td>lime tree jet lag</td><td>2.67</td><td/><td>4.61</td><td colspan=\"4\">tennis elbow wedding day 4.94</td><td>2.50</td></tr><tr><td>loan shark job fair</td><td>3.50</td><td/><td>1.00</td><td colspan=\"2\">white noise</td><td colspan=\"2\">top dog</td><td>1.17</td><td>1.05</td></tr><tr><td>loose woman leap year</td><td>2.38</td><td/><td>2.53</td><td colspan=\"2\">white spirit</td><td colspan=\"2\">wet blanket</td><td>1.31</td><td>0.21</td></tr><tr><td>mail service love song</td><td>4.58</td><td/><td>4.69</td><td colspan=\"4\">word painting winter solstice 4.55</td><td>1.62</td></tr><tr><td>low profile</td><td>2.10</td><td/><td/><td colspan=\"4\">world conference</td><td>3.96</td></tr><tr><td>D.2 Compounds</td><td/><td/><td colspan=\"2\">hc HM</td><td/><td/><td>Compounds</td><td>hc HM</td></tr><tr><td>academy award</td><td/><td/><td colspan=\"2\">3.52</td><td/><td/><td>blue blood</td><td>0.58</td></tr><tr><td>arcade game</td><td/><td/><td colspan=\"2\">3.80</td><td/><td/><td>blue print</td><td>1.04</td></tr><tr><td>Compounds baby blues backroom boy</td><td colspan=\"2\">hc HM</td><td colspan=\"2\">2.88 1.48</td><td colspan=\"3\">Compounds box office brain drain</td><td>hc HM</td><td>0.88 2.08</td></tr><tr><td>ancient history bad apple</td><td>1.95</td><td/><td colspan=\"2\">1.13</td><td colspan=\"3\">closed book bull market</td><td>0.68</td><td>1.23</td></tr><tr><td>armchair critic banana republic</td><td>1.33</td><td/><td colspan=\"2\">0.86</td><td colspan=\"3\">computer program cable car</td><td>4.50</td><td>2.68</td></tr><tr><td colspan=\"2\">baby buggy bankruptcy proceeding 3.94</td><td/><td colspan=\"2\">4.78</td><td colspan=\"3\">con artist calendar month</td><td>2.10</td><td>4.23</td></tr><tr><td>bad hat basket case</td><td>0.62</td><td/><td colspan=\"2\">0.42</td><td colspan=\"3\">cooking stove civil marriage</td><td>4.68</td><td>3.13</td></tr><tr><td>benign tumour beauty sleep</td><td>4.69</td><td/><td colspan=\"2\">2.96</td><td colspan=\"3\">cotton candy cocoa butter</td><td>1.79</td><td>3.23</td></tr><tr><td>big fish best man</td><td>0.85</td><td/><td colspan=\"2\">3.12</td><td colspan=\"3\">critical review computer expert</td><td>4.06</td><td>4.46</td></tr><tr><td>birth rate big cheese</td><td>4.60</td><td/><td colspan=\"2\">0.36</td><td colspan=\"3\">dead end contact lenses</td><td>1.32</td><td>3.64</td></tr><tr><td>black cherry big picture</td><td>3.11</td><td/><td colspan=\"2\">1.48</td><td colspan=\"3\">dirty money copy cat</td><td>2.21</td><td>0.74</td></tr><tr><td>bow tie big wig</td><td>4.25</td><td/><td colspan=\"2\">0.60</td><td colspan=\"3\">dirty word crime rate</td><td>2.48</td><td>4.39</td></tr><tr><td>brain teaser biological clock</td><td>2.65</td><td/><td colspan=\"2\">2.42</td><td colspan=\"3\">disc jockey damp squib</td><td>1.25</td><td>0.95</td></tr><tr><td>busy bee black box</td><td>0.88</td><td/><td colspan=\"2\">1.29</td><td colspan=\"3\">divine service dark horse</td><td>3.11</td><td>0.65</td></tr><tr><td>carpet bombing black operation</td><td>1.24</td><td/><td colspan=\"2\">1.39</td><td colspan=\"2\">dry land</td><td>day shift</td><td>3.95</td><td>4.54</td></tr><tr><td>cellular phone blind alley</td><td>3.78</td><td/><td colspan=\"2\">1.14</td><td colspan=\"2\">dry wall</td><td>3.33 disability insurance</td><td>4.45</td></tr><tr><td>close call blood bath</td><td>1.59</td><td/><td colspan=\"2\">1.38</td><td colspan=\"3\">dust storm double cross</td><td>3.85</td><td>1.14</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |