{
"paper_id": "D18-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:50:12.659019Z"
},
"title": "Improving Cross-Lingual Word Embeddings by Meeting in the Middle",
"authors": [
{
"first": "Yerai",
"middle": [],
"last": "Doval",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Escola Superior de Enxe\u00f1ar\u00eda Inform\u00e1tica Universidade de Vigo",
"location": {
"country": "Spain"
}
},
"email": "yerai.doval@uvigo.es"
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "UK"
}
},
"email": "camachocolladosj@cardiff.ac.uk"
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "UK"
}
},
"email": "espinosa-ankel@cardiff.ac.uk"
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cardiff University",
"location": {
"country": "UK"
}
},
"email": "schockaerts1@cardiff.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Cross-lingual word embeddings are becoming increasingly important in multilingual NLP. Recently, it has been shown that these embeddings can be effectively learned by aligning two disjoint monolingual vector spaces through linear transformations, using no more than a small bilingual dictionary as supervision. In this work, we propose to apply an additional transformation after the initial alignment step, which moves cross-lingual synonyms towards a middle point between them. By applying this transformation our aim is to obtain a better cross-lingual integration of the vector spaces. In addition, and perhaps surprisingly, the monolingual spaces also improve by this transformation. This is in contrast to the original alignment, which is typically learned such that the structure of the monolingual spaces is preserved. Our experiments confirm that the resulting cross-lingual embeddings outperform state-of-the-art models in both monolingual and cross-lingual evaluation tasks.",
"pdf_parse": {
"paper_id": "D18-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "Cross-lingual word embeddings are becoming increasingly important in multilingual NLP. Recently, it has been shown that these embeddings can be effectively learned by aligning two disjoint monolingual vector spaces through linear transformations, using no more than a small bilingual dictionary as supervision. In this work, we propose to apply an additional transformation after the initial alignment step, which moves cross-lingual synonyms towards a middle point between them. By applying this transformation our aim is to obtain a better cross-lingual integration of the vector spaces. In addition, and perhaps surprisingly, the monolingual spaces also improve by this transformation. This is in contrast to the original alignment, which is typically learned such that the structure of the monolingual spaces is preserved. Our experiments confirm that the resulting cross-lingual embeddings outperform state-of-the-art models in both monolingual and cross-lingual evaluation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embeddings are one of the most widely used resources in NLP, as they have proven to be of enormous importance for modeling linguistic phenomena in both supervised and unsupervised settings. In particular, the representation of words in cross-lingual vector spaces (henceforth, cross-lingual word embeddings) is quickly gaining in popularity. One of the main reasons is that they play a crucial role in transferring knowledge from one language to another, specifically in downstream tasks such as information retrieval (Vuli\u0107 and Moens, 2015b) , entity linking (Tsai and Roth, 2016) and text classification (Mogadala and Rettinger, 2016) , while at the same time providing improvements in multilingual NLP problems such as machine translation (Zou et al., 2013) .",
"cite_spans": [
{
"start": 522,
"end": 546,
"text": "(Vuli\u0107 and Moens, 2015b)",
"ref_id": "BIBREF53"
},
{
"start": 564,
"end": 585,
"text": "(Tsai and Roth, 2016)",
"ref_id": "BIBREF49"
},
{
"start": 610,
"end": 640,
"text": "(Mogadala and Rettinger, 2016)",
"ref_id": "BIBREF34"
},
{
"start": 746,
"end": 764,
"text": "(Zou et al., 2013)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There exist different approaches for obtaining these cross-lingual embeddings. One of the most successful methodological directions, which constitutes the main focus of this paper, attempts to learn bilingual embeddings via a two-step process: first, word embeddings are trained on monolingual corpora and then the resulting monolingual spaces are aligned by taking advantage of bilingual dictionaries (Mikolov et al., 2013b; Faruqui and Dyer, 2014; Xing et al., 2015) .",
"cite_spans": [
{
"start": 402,
"end": 425,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF33"
},
{
"start": 426,
"end": 449,
"text": "Faruqui and Dyer, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 450,
"end": 468,
"text": "Xing et al., 2015)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These alignments are generally modeled as linear transformations, which are constrained such that the structure of the initial monolingual spaces is left unchanged. This can be achieved by imposing an orthogonality constraint on the linear transformation (Xing et al., 2015; Artetxe et al., 2016) . Our hypothesis in this paper is that such approaches can be further improved, as they rely on the assumption that the internal structure of the two monolingual spaces is identical. In reality, however, this structure is influenced by language-specific phenomena, e.g., the fact that Spanish distinguishes between masculine and feminine nouns (Davis, 2015) , as well as the specific biases of the different corpora from which the monolingual spaces were learned. Because of this, monolingual embedding spaces are not isomorphic (Kementchedjhieva et al., 2018) . On the other hand, simply dropping the orthogonality constraints leads to overfitting, and is thus not effective in practice.",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "(Xing et al., 2015;",
"ref_id": "BIBREF55"
},
{
"start": 275,
"end": 296,
"text": "Artetxe et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 640,
"end": 653,
"text": "(Davis, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 823,
"end": 853,
"text": "Kementchedjhieva et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The solution we propose is to start with existing state-of-the-art alignment models (Artetxe et al., 2017; Conneau et al., 2018) , and to apply a further transformation to the resulting initial alignment. For each word w with translation w', this additional transformation aims to map the vector representations of both w and w' onto their average, thereby creating a cross-lingual vector space which intuitively corresponds to the average of the two aligned monolingual vector spaces. Similar to the initial alignment, this mapping is learned from a small bilingual lexicon.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Artetxe et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 107,
"end": 128,
"text": "Conneau et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experimental results show that the proposed additional transformation does not only benefit cross-lingual evaluation tasks, but, perhaps surprisingly, also monolingual ones. In particular, we perform an extensive set of experiments on standard benchmarks for bilingual dictionary induction and monolingual and cross-lingual word similarity, as well as on an extrinsic task: cross-lingual hypernym discovery.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Code and pre-trained embeddings to reproduce our experiments and to apply our model to any given cross-lingual embeddings are available at https://github.com/yeraidm/meemi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bilingual word embeddings have been extensively studied in the literature in recent years. Their nature varies with respect to the supervision signals used for training (Upadhyay et al., 2016; . Some common signals to learn bilingual embeddings come from parallel (Hermann and Blunsom, 2014; Luong et al., 2015; Levy et al., 2017) or comparable corpora (Vuli\u0107 and Moens, 2015a; S\u00f8gaard et al., 2015; Vuli\u0107 and Moens, 2016) , or lexical resources such as WordNet, ConceptNet or BabelNet (Speer et al., 2017; Mrksic et al., 2017; Goikoetxea et al., 2018) . However, these sources of supervision may be scarce, limited to certain domains or may not be directly available for certain language pairs. Another branch of research exploits pre-trained monolingual embeddings with weak signals such as bilingual lexicons for learning bilingual embeddings (Mikolov et al., 2013b; Faruqui and Dyer, 2014; Ammar et al., 2016; Artetxe et al., 2016) . Mikolov et al. (2013b) was one of the first attempts in this line of research, applying a linear transformation in order to map the embeddings from one monolingual space into another. They also noted that more sophisticated approaches, such as using multilayer perceptrons, do not improve with respect to their linear counterparts. Xing et al. (2015) built upon this work by normalizing word embeddings during training and adding an orthogonality constraint. In a complementary direction, Faruqui and Dyer (2014) put forward a technique based on canonical correlation analysis to obtain linear mappings for both monolingual embedding spaces into a new shared space. Artetxe et al. (2016) proposed a similar linear mapping to Mikolov et al. (2013b) , generalizing it and providing theoretical justifications which also served to reinterpret the methods of Faruqui and Dyer (2014) and Xing et al. (2015) . Smith et al. (2017) further showed how orthogonality was required to improve the consistency of bilingual mappings, making them more robust to noise. Finally, a more complete generalization providing further insights on the linear transformations used in all these models can be found in Artetxe et al. (2018a) .",
"cite_spans": [
{
"start": 169,
"end": 192,
"text": "(Upadhyay et al., 2016;",
"ref_id": "BIBREF50"
},
{
"start": 264,
"end": 291,
"text": "(Hermann and Blunsom, 2014;",
"ref_id": null
},
{
"start": 292,
"end": 311,
"text": "Luong et al., 2015;",
"ref_id": "BIBREF31"
},
{
"start": 312,
"end": 330,
"text": "Levy et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 353,
"end": 377,
"text": "(Vuli\u0107 and Moens, 2015a;",
"ref_id": "BIBREF52"
},
{
"start": 378,
"end": 399,
"text": "S\u00f8gaard et al., 2015;",
"ref_id": "BIBREF46"
},
{
"start": 400,
"end": 422,
"text": "Vuli\u0107 and Moens, 2016)",
"ref_id": "BIBREF54"
},
{
"start": 486,
"end": 506,
"text": "(Speer et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 507,
"end": 527,
"text": "Mrksic et al., 2017;",
"ref_id": "BIBREF35"
},
{
"start": 528,
"end": 552,
"text": "Goikoetxea et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 846,
"end": 869,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF33"
},
{
"start": 870,
"end": 893,
"text": "Faruqui and Dyer, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 894,
"end": 913,
"text": "Ammar et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 914,
"end": 935,
"text": "Artetxe et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 938,
"end": 960,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF33"
},
{
"start": 1272,
"end": 1290,
"text": "Xing et al. (2015)",
"ref_id": "BIBREF55"
},
{
"start": 1429,
"end": 1452,
"text": "Faruqui and Dyer (2014)",
"ref_id": "BIBREF18"
},
{
"start": 1607,
"end": 1628,
"text": "Artetxe et al. (2016)",
"ref_id": "BIBREF1"
},
{
"start": 1666,
"end": 1688,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF33"
},
{
"start": 1796,
"end": 1819,
"text": "Faruqui and Dyer (2014)",
"ref_id": "BIBREF18"
},
{
"start": 1824,
"end": 1842,
"text": "Xing et al. (2015)",
"ref_id": "BIBREF55"
},
{
"start": 2133,
"end": 2155,
"text": "Artetxe et al. (2018a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "These approaches generally require large bilingual lexicons to effectively learn multilingual embeddings (Artetxe et al., 2017) . Recently, however, alternatives which only need very small dictionaries, or even none at all, have been proposed to learn high-quality embeddings via linear mappings (Artetxe et al., 2017; Conneau et al., 2018) . More details on the specifics of these two approaches can be found in Section 3.1. These models have in turn paved the way for the development of machine translation systems which do not require any parallel corpora (Artetxe et al., 2018b; . Moreover, the fact that such approaches only need monolingual embeddings, instead of parallel or comparable corpora, makes them easily adaptable to different domains (e.g., social media or web corpora).",
"cite_spans": [
{
"start": 105,
"end": 127,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 296,
"end": 318,
"text": "(Artetxe et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 319,
"end": 340,
"text": "Conneau et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 559,
"end": 582,
"text": "(Artetxe et al., 2018b;",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper we build upon these state-of-the-art approaches by applying an additional transformation, which aims to map each word and its translation onto the average of their vector representations. This strategy bears some resemblance to the idea of learning meta-embeddings (Yin and Sch\u00fctze, 2016) . Meta-embeddings are vector space representations which aggregate several pretrained word embeddings from a given language (e.g., trained using different corpora and/or different word embedding models). Empirically it was found that such meta-embeddings can often outperform the individual word embeddings from which they were obtained. In particular, it was recently argued that word vector averaging can be a highly effective approach for learning such meta-embeddings (Coates and Bollegala, 2018) . The main difference between such approaches and our work is that because we rely on a small dictionary, we cannot simply average word vectors, since for most words we do not know the corresponding translation. Instead, we train a regression model to predict this average word vector from the vector representation of the given word only, i.e., without using the vector representation of its translation.",
"cite_spans": [
{
"start": 280,
"end": 303,
"text": "(Yin and Sch\u00fctze, 2016)",
"ref_id": "BIBREF57"
},
{
"start": 775,
"end": 803,
"text": "(Coates and Bollegala, 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our approach for improving cross-lingual embeddings consists of three main steps, where the first two steps are the same as in existing methods. In particular, given two monolingual corpora, a word vector space is first learned independently for each language. This can be achieved with common word embedding models, e.g., Word2vec (Mikolov et al., 2013a) , GloVe (Pennington et al., 2014) or FastText (Bojanowski et al., 2017) . Second, a linear alignment strategy is used to map the monolingual embeddings to a common bilingual vector space (Section 3.1). Third, a final transformation is applied on the aligned embeddings so the word vectors from both languages are refined and further integrated with each other (Section 3.2). This third step is the main contribution of our paper.",
"cite_spans": [
{
"start": 332,
"end": 355,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF32"
},
{
"start": 364,
"end": 389,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 402,
"end": 427,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Once the monolingual word embeddings have been obtained, a linear transformation is applied in order to integrate them into the same vector space. This linear transformation is generally carried out using a supervision signal, typically in the form of a bilingual dictionary. In the following we explain two state-of-the-art models performing this linear transformation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aligning monolingual spaces",
"sec_num": "3.1"
},
{
"text": "VecMap (Artetxe et al., 2017) . VecMap uses an orthogonal transformation over normalized word embeddings. An iterative two-step procedure is also implemented in order to avoid the need to start with a large seed dictionary (e.g., in the original paper it was tested with a very small bilingual dictionary of just 25 pairs). In this procedure, first, the linear mapping is estimated using a small bilingual dictionary, and then this dictionary is augmented by applying the learned transformation to new words from the source language. This process is repeated until some convergence criterion is met.",
"cite_spans": [
{
"start": 7,
"end": 29,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Aligning monolingual spaces",
"sec_num": "3.1"
},
{
"text": "MUSE (Conneau et al., 2018) . In this case, the transformation matrix is learned through an iterative Procrustes alignment (Sch\u00f6nemann, 1966) . 1 The anchor points needed for this alignment can be obtained either through a supplied bilingual dictionary or through an unsupervised model. This unsupervised model is trained using adversarial learning to obtain an initial alignment of the two monolingual spaces, which is then refined by the Procrustes alignment using the most frequent words as anchor points. A new distance metric for the embedding space, referred to as cross-domain similarity local scaling, is also introduced. This metric, which takes into account the nearest neighbors of both source and target words, was shown to better handle high-density regions of the space, thus alleviating the hubness problem of word embedding models (Radovanovi\u0107 et al., 2010; Dinu et al., 2015) , which arises when a few points (known as hubs) become the nearest neighbors of many other points in the embedding space.",
"cite_spans": [
{
"start": 5,
"end": 27,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 124,
"end": 142,
"text": "(Sch\u00f6nemann, 1966)",
"ref_id": "BIBREF42"
},
{
"start": 145,
"end": 146,
"text": "1",
"ref_id": null
},
{
"start": 847,
"end": 873,
"text": "(Radovanovi\u0107 et al., 2010;",
"ref_id": "BIBREF38"
},
{
"start": 874,
"end": 892,
"text": "Dinu et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Aligning monolingual spaces",
"sec_num": "3.1"
},
{
"text": "After the initial alignment of the monolingual word embeddings, our proposed method leverages an additional linear model to refine the resulting bilingual word embeddings. This is because the methods presented in the previous section apply constraints to ensure that the structure of the monolingual embeddings is largely preserved. As already mentioned in the introduction, conceptually this may not be optimal, as embeddings for different languages and trained from different corpora can be expected to be structured somewhat differently. Empirically, as we will see in the evaluation, after applying methods such as VecMap and MUSE there still tend to be significant gaps between the vector representations of words and their translations. Our method directly attempts to reduce these gaps by moving each word vector towards the middle point between its current representation and the representation of its translation. In this way, by bringing the two monolingual fragments of the space closer to each other, we can expect to see an improved performance on cross-lingual evaluation tasks such as bilingual dictionary induction. Importantly, the internal structure of the two monolingual fragments themselves is also affected by this step. By averaging between the representations obtained from different languages, we hypothesize that the impact of language-specific phenomena and corpus-specific biases will be reduced, thereby ending up with more \"neutral\" monolingual embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting in the middle",
"sec_num": "3.2"
},
{
"text": "In the following, we detail our methodological approach. First, we leverage the same bilingual dictionary that was used to obtain the initial alignment (Section 3.1). Specifically, let D = {(w, w')} be the given bilingual dictionary, where w \u2208 V and w' \u2208 V', with V and V' representing the vocabulary of the first and second language, respectively. For pairs (w, w') \u2208 D, we can simply compute the corresponding average vector \u00b5_{w,w'} =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting in the middle",
"sec_num": "3.2"
},
{
"text": "(v_w + v_{w'}) / 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting in the middle",
"sec_num": "3.2"
},
{
"text": ". Then, using the pairs in D as training data, we learn a linear mapping X such that X v_w \u2248 \u00b5_{w,w'} for all (w, w') \u2208 D. This mapping X can then be used to predict the averages for words outside the given dictionary. To find the mapping X, we solve the following least squares linear regression problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting in the middle",
"sec_num": "3.2"
},
{
"text": "E = \u2211_{(w,w') \u2208 D} \u2016X v_w \u2212 \u00b5_{w,w'}\u2016\u00b2 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting in the middle",
"sec_num": "3.2"
},
{
"text": "Similarly, for the other language, we separately learn a mapping X' such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting in the middle",
"sec_num": "3.2"
},
{
"text": "X' v_{w'} \u2248 \u00b5_{w,w'}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting in the middle",
"sec_num": "3.2"
},
{
"text": "It is worth pointing out that we experimented with several variants of this linear regression formulation. For example, we also tried using a multilayer perceptron to learn non-linear mappings, and we experimented with several regularization terms to penalize mappings that deviate too much from the identity mapping. None of these variants, however, were found to improve on the much simpler formulation in (1), which can be solved exactly and efficiently. Furthermore, one may wonder whether the initial alignment is actually needed, since e.g., Coates and Bollegala (2018) obtained high-quality meta-embeddings without such an alignment step. However, when applying our approach directly to the initial monolingual non-aligned embedding spaces, we obtained results which were competitive but slightly below the two considered alignment strategies.",
"cite_spans": [
{
"start": 548,
"end": 575,
"text": "Coates and Bollegala (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meeting in the middle",
"sec_num": "3.2"
},
{
"text": "We test our bilingual embedding refinement approach on both intrinsic and extrinsic tasks. In Section 4.1 we describe the common training setup for all experiments and language pairs. The languages we considered are English, Spanish, Italian, German and Finnish. Throughout all the experiments we use publicly available resources in order to make comparisons and reproducibility of our experiments easier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Corpora. In our experiments we make use of web-extracted corpora. For English we use the 3B-word UMBC WebBase Corpus (Han et al., 2013) , while we chose the Spanish Billion Words Corpus (Cardellino, 2016) for Spanish. For Italian and German, we use the itWaC and sdeWaC corpora from the WaCky project (Baroni et al., 2009) , containing 2 and 0.8 billion words, respectively. 2 Lastly, for Finnish, we use the Common Crawl monolingual corpus from the Machine Translation of News Shared Task 2016 3 , composed of 2.8B words. All corpora are tokenized and lowercased. Monolingual embeddings. The monolingual word embeddings are trained with the Skipgram model from FastText (Bojanowski et al., 2017) on the corpora described above. The dimensionality of the vectors was set to 300, with the default FastText hyperparameters. Bilingual dictionaries. We use the bilingual dictionaries packaged together by Artetxe et al. (2017) , each one containing 5,000 word translations. They are used both for the initial bilingual mappings and then again for our linear transformation. Initial mapping. Following previous works, for the purpose of obtaining the initial alignment, English is considered as the source language and the remaining languages are used as targets. We make use of the open-source implementations of VecMap 4 (Artetxe et al., 2017) and MUSE 5 (Conneau et al., 2018), which constitute strong baselines for our experiments (cf. Section 3.1). Both of them were used with the recommended parameters and in their supervised setting, using the aforementioned bilingual dictionaries. Meeting in the Middle. Then, once the initial cross-lingual embeddings are trained, and as explained in Section 3.2, we obtain our linear transformation by using the exact solution to the least squares linear regression problem. To this end, we use the same bilingual dictionaries as in the previous step. Henceforth, we will refer to our transformed models as VecMap \u00b5 and MUSE \u00b5 , depending on the initial mapping.",
"cite_spans": [
{
"start": 117,
"end": 135,
"text": "(Han et al., 2013)",
"ref_id": "BIBREF22"
},
{
"start": 186,
"end": 204,
"text": "(Cardellino, 2016)",
"ref_id": "BIBREF12"
},
{
"start": 301,
"end": 322,
"text": "(Baroni et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 671,
"end": 696,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 902,
"end": 923,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual embeddings training",
"sec_num": "4.1"
},
{
"text": "We test our cross-lingual word embeddings in two intrinsic tasks, i.e., bilingual dictionary induction (Section 4.2.1) and word similarity (Section 4.2.2), and an extrinsic task, i.e., cross-lingual hypernym discovery (Section 4.2.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.2"
},
{
"text": "The dictionary induction task consists in automatically generating a bilingual dictionary from a source to a target language, using as input a list of words in the source language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual dictionary induction",
"sec_num": "4.2.1"
},
{
"text": "Experimental setting For this task, and following previous works, we use the English-Italian test set released by Dinu et al. (2015) and those released by Artetxe et al. (2017) for the remaining language pairs. These test sets have no overlap with respect to the training and development sets, and contain around 1900 entries each. Given an input word from the source language, word translations are retrieved through a nearest-neighbor search of words in the target language, using cosine distance. Note that this gives us a ranked list of candidates for each word from the source language. Accordingly, the performance of the embeddings is evaluated with the precision at k (P @k) metric, which measures the percentage of test pairs for which the correct answer is among the k highest-ranked candidates.",
"cite_spans": [
{
"start": 114,
"end": 132,
"text": "Dinu et al. (2015)",
"ref_id": "BIBREF16"
},
{
"start": 155,
"end": 176,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual dictionary induction",
"sec_num": "4.2.1"
},
{
"text": "Results As can be seen in Table 1 , our refinement method consistently improves over the baselines (i.e., VecMap and MUSE) on all language pairs and metrics. The higher scores indicate that the two monolingual embedding spaces become more tightly integrated because of our additional transformation. It is worth highlighting here the case of English-Finnish, where the gains obtained in P @5 and P @10 are considerable. This might indicate that our approach is especially useful for morphologically richer languages such as Finnish, where the limitations of the previous bilingual mappings are most apparent.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bilingual dictionary induction",
"sec_num": "4.2.1"
},
{
"text": "Analysis When analyzing the source of errors in P @1, we came to similar conclusions as Artetxe et al. (2017). 6 Several source words are translated to words that are closely related to the one in the gold reference in the target language; e.g., for the English word essentially we obtain b\u00e1sicamente (basically) instead of fundamentalmente (fundamentally) in Spanish, both of them closely related, or the closest neighbor for dirt being mugre (dirt) instead of suciedad (dirt), which in fact was among the five closest neighbors. We can also find multiple examples of the higher performance of our models compared to the baselines. For instance, in the English-Spanish cross-lingual models, after the initial alignment, we can find that seconds has minutos (minutes) as nearest neighbor, but after applying our additional transformation, seconds becomes closest to segundos (seconds). Similarly, paint initially has tintado (tinted) as the closest Spanish word, and then pintura (paint).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual dictionary induction",
"sec_num": "4.2.1"
},
{
"text": "We perform experiments on both monolingual and cross-lingual word similarity. In monolingual similarity, models are tested in their ability to determine the similarity between two words in the same language, whereas in cross-lingual similarity the words belong to different languages. While in the monolingual setting the main objective is to test the quality of the monolingual subsets of the bilingual vector space, the cross-lingual setting constitutes a straightforward benchmark to test the quality of bilingual embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word similarity",
"sec_num": "4.2.2"
},
{
"text": "Experimental setting For monolingual word similarity we use the English SimLex-999 (Hill et al., 2015) , and the language-specific versions of SemEval-17 7 (Camacho-Collados et al., 2017), WordSim-353 8 (Finkelstein et al., 2002) , and RG-65 (Rubenstein and Goodenough, 1965) . The corresponding cross-lingual datasets from SemEval-17, WordSim-353 and RG-65 were considered for the cross-lingual word similarity evaluation 9 . Cosine similarity is again used as comparison measure.",
"cite_spans": [
{
"start": 83,
"end": 102,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 203,
"end": 229,
"text": "(Finkelstein et al., 2002)",
"ref_id": "BIBREF19"
},
{
"start": 242,
"end": 275,
"text": "(Rubenstein and Goodenough, 1965)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word similarity",
"sec_num": "4.2.2"
},
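The evaluation protocol described above, comparing cosine similarities of word vectors against human similarity ratings, can be sketched as follows. The embeddings and gold scores here are invented for illustration; Spearman correlation, the standard metric for these benchmarks, is computed directly on the ranks to keep the example dependency-free.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(x, y):
    """Spearman correlation via Pearson on the ranks (no ties assumed)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical embeddings and gold similarity ratings for three word pairs.
vectors = {
    "car":     np.array([1.0, 0.20, 0.00]),
    "bicycle": np.array([0.9, 0.25, 0.05]),
    "opera":   np.array([0.0, 1.00, 0.30]),
    "cinema":  np.array([0.1, 0.90, 0.40]),
    "snow":    np.array([0.2, 0.00, 1.00]),
    "paint":   np.array([0.9, 0.10, 0.20]),
}
pairs = [("car", "bicycle"), ("opera", "cinema"), ("snow", "paint")]
gold = [7.5, 7.0, 1.0]  # toy human ratings on a 0-10 scale
pred = [cosine(vectors[a], vectors[b]) for a, b in pairs]
rho = spearman(gold, pred)
print(rho)
```

The same computation, run over the full dataset of word pairs, produces the correlation scores reported in the tables.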
{
"text": "Results Tables 2 and 3 show the monolingual 10 and cross-lingual word similarity results 11 , respectively. For both the monolingual and cross-lingual settings, we can notice that our models generally outperform the corresponding baselines. Moreover, in cases where no improvement is obtained, the differences tend to be minimal, with the exception of RG-65, but this is a very small test set for which larger variations can thus be expected. In contrast, there are a few cases where substantial gains were obtained by using our model. This is most notable for English WordSim and SimLex in the monolingual setting. 7 The original datasets of SemEval-17 contained also multiwords, but for consistency we use the version containing single words only.",
"cite_spans": [
{
"start": 616,
"end": 617,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 8,
"end": 22,
"text": "Tables 2 and 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Word similarity",
"sec_num": "4.2.2"
},
{
"text": "8 WordSim datasets consist of the similarity re-scoring for several languages of Leviant and Reichart (2015) , downloaded from http://leviants.com/ira.leviant/ MultilingualVSMdata.html 9 The WordSim-353 and RG-65 cross-lingual datasets (Camacho-Collados et al., 2015) were downloaded at http: //lcl.uniroma1.it/similarity-datasets/",
"cite_spans": [
{
"start": 81,
"end": 108,
"text": "Leviant and Reichart (2015)",
"ref_id": "BIBREF29"
},
{
"start": 236,
"end": 267,
"text": "(Camacho-Collados et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word similarity",
"sec_num": "4.2.2"
},
{
"text": "10 The English results correspond to the averaged performance of the English fragments of English-Spanish, English-Italian and English-German cross-lingual embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word similarity",
"sec_num": "4.2.2"
},
{
"text": "11 The results of the original VecMap in cross-lingual similarity are comparable or better to those reported in Artetxe et al. (2017) on the three datasets used in their evaluation. Analysis In order to further understand the movements of the space with respect to the original VecMap and MUSE spaces, Figure 1 displays the average similarity values on the Se-mEval cross-lingual datasets (the largest among all benchmarks) of each model. As expected, the figure clearly shows how our model consistently brings the words from both languages closer on all language pairs. Furthermore, this movement is performed smoothly across all pairs, i.e., our model does not make large changes to specific words but rather small changes overall. This can be verified by inspecting the standard deviation of the difference in similarity after applying our transformation. These standard deviation scores range from 0.031 (English-Spanish for VecMap) to 0.039 (English-Italian for MUSE), which are relatively small given that the cosine similarity scale ranges from -1 to 1.",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 302,
"end": 310,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Word similarity",
"sec_num": "4.2.2"
},
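The dispersion analysis above can be reproduced in a few lines: compute the cosine similarity of each translation pair before and after the transformation, then inspect the mean and standard deviation of the differences. The vectors below are toy values standing in for the actual embeddings.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy translation pairs: source vectors before and after the transformation,
# plus the target-language vectors they are compared against.
before = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])
after_ = np.array([[0.9, 0.1], [0.1, 0.9], [0.62, 0.78]])
targets = np.array([[0.8, 0.2], [0.2, 0.8], [0.6, 0.8]])

sim_before = np.array([cosine(u, v) for u, v in zip(before, targets)])
sim_after = np.array([cosine(u, v) for u, v in zip(after_, targets)])
delta = sim_after - sim_before

# A positive mean means pairs move closer; a small standard deviation means
# the shift is smooth rather than concentrated on specific words.
print(delta.mean(), delta.std())
```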
{
"text": "As a complement of this analysis we show some qualitative results which give us further insights on the transformations of the vector space after our average approximation. In particular, we analyze the reasons behind the higher quality displayed by our bilingual embeddings in monolingual settings. While VecMap and MUSE do not transform the initial monolingual spaces, our model transforms both spaces simultaneously. In this analysis we focus on the source language of our experiments (i.e., English). We found interesting patterns which are learned by our model and help understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For example, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, moviefilm, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., tel\u00e9fono and pel\u00edcula in the first two cases) or are already very close in the Spanish space. More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word similarity",
"sec_num": "4.2.2"
},
{
"text": "Modeling hypernymy is a crucial task in NLP, with direct applications in diverse areas such as semantic search (Hoffart et al., 2014; Roller and Erk, 2016) , question answering (Prager et al., 2008; Yahya et al., 2013) or textual entailment (Geffet and Dagan, 2005) . Hypernyms, in addition, are the backbone of lexical ontologies (Yu et al., 2015) , which are in turn useful for organizing, navigating and retrieving online content (Bordea et al., 2016) . Thus, we propose to evaluate the contribution of cross-lingual embeddings towards the task of hypernym discovery, i.e., given an input word (e.g., cat), retrieve or discover its most likely (set of) valid hypernyms (e.g., animal, mammal, feline, and so on). Intuitively, by leveraging a bilingual vector space condensing the semantics of two languages, one of them being English, the need for large amounts of training data in the target language may be reduced.",
"cite_spans": [
{
"start": 111,
"end": 133,
"text": "(Hoffart et al., 2014;",
"ref_id": "BIBREF26"
},
{
"start": 134,
"end": 155,
"text": "Roller and Erk, 2016)",
"ref_id": "BIBREF39"
},
{
"start": 177,
"end": 198,
"text": "(Prager et al., 2008;",
"ref_id": "BIBREF37"
},
{
"start": 199,
"end": 218,
"text": "Yahya et al., 2013)",
"ref_id": "BIBREF56"
},
{
"start": 241,
"end": 265,
"text": "(Geffet and Dagan, 2005)",
"ref_id": "BIBREF20"
},
{
"start": 331,
"end": 348,
"text": "(Yu et al., 2015)",
"ref_id": "BIBREF58"
},
{
"start": 433,
"end": 454,
"text": "(Bordea et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual hypernym discovery",
"sec_num": "4.2.3"
},
{
"text": "Experimental setting We follow Espinosa-Anke et al. (2016) and learn a (cross-lingual) linear transformation matrix between the hyponym and hypernym spaces, which is afterwards used to predict the most likely (set of) hypernyms, given an unseen hyponym. Training and evaluation data come from the SemEval 2018 Shared Task on Hypernym Discovery (Camacho-Collados et al., 2018) . Note that current state-of-the-art systems aimed at modeling hypernymy (Shwartz et al., 2016; Bernier-Colborne and Barriere, 2018) combine large amounts of annotated data along with language-specific rules and cue phrases such as Hearst Patterns (Hearst, 1992) , both of which are generally scarcely (if at all) available for languages other than English. Therefore, we report experiments with training data only from English (11,779 hyponym-hypernym pairs), and \"enriched\" models informed with relatively few training pairs (500, 1k and 2k) from the target languages. Evaluation is conducted with the same metrics as in the original SemEval task, i.e., Mean Reciprocal Rank (MRR), Mean Average Precision (MAP) and Precision at 5 (P@5). These measures explain a model's behavior from complementary prisms, namely how often at least one valid hypernym was highly ranked (MRR), and in cases where there is more than one correct hypernym, to what extent they were all correctly retrieved (MAP and P@5). Finally, as in the previous experiments, we report comparative results between our proposed models and the two competing baselines (VecMap and MUSE). As an additional informative baseline, we include the highest scoring unsupervised system at the SemEval task for both Spanish and Italian (BestUns), which is based on the distributional models described in Shwartz et al. (2017) .",
"cite_spans": [
{
"start": 31,
"end": 58,
"text": "Espinosa-Anke et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 344,
"end": 375,
"text": "(Camacho-Collados et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 449,
"end": 471,
"text": "(Shwartz et al., 2016;",
"ref_id": "BIBREF43"
},
{
"start": 472,
"end": 508,
"text": "Bernier-Colborne and Barriere, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 624,
"end": 638,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF23"
},
{
"start": 1735,
"end": 1756,
"text": "Shwartz et al. (2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual hypernym discovery",
"sec_num": "4.2.3"
},
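The setting above can be condensed into a short sketch: fit a linear hyponym-to-hypernym map by least squares, rank candidate hypernyms by cosine similarity to the predicted vector, and score the ranking with MRR. All data here is synthetic (the gold map is known, so the fit is exact); in the task itself the map is trained on the SemEval hyponym-hypernym pairs and candidates come from the target-language vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Synthetic setup: hypernym vectors are a fixed linear function of hyponym
# vectors, so a least-squares fit can recover the transformation.
true_map = rng.normal(size=(dim, dim))
hyponyms = rng.normal(size=(50, dim))
hypernyms = hyponyms @ true_map.T

# Fit M such that hyponyms @ M ~= hypernyms (the supervised training step).
M, *_ = np.linalg.lstsq(hyponyms, hypernyms, rcond=None)

def mrr(ranks):
    """Mean Reciprocal Rank over the 1-indexed rank of the first correct answer."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# For an unseen hyponym, rank ten candidates (gold hypernym plus nine
# random distractors) by cosine similarity to the predicted hypernym vector.
x = rng.normal(size=dim)
pred = x @ M
candidates = np.vstack([x @ true_map.T, rng.normal(size=(9, dim))])
sims = candidates @ pred / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(pred))
rank = int(np.argsort(-sims).tolist().index(0)) + 1  # position of the gold candidate
print(rank, mrr([rank]))
```

MAP and P@5 are computed analogously over the full ranked candidate list when several hypernyms are correct.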
{
"text": "Results The results listed in Table 4 model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations. In Italian our proposed model shows an improvement across all configurations. However, in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric). This suggests that the fact that the monolingual spaces are closer in our model is clearly beneficial when hybrid training data is given as input, opening up avenues for future work on weakly-supervised learning. Concerning the other baseline, MUSE, the contribution of our proposed model is consistent for both languages, again becoming more apparent in the Italian split and in a fully cross-lingual setting, where the improvement in MRR is almost 3 points (from 10.6 to 13.3). Finally, it is noteworthy that even in the setting where no training data from the target language is leveraged, all the systems based on cross-lingual embeddings outperform the best unsupervised baseline, which is a very encouraging result with regards to solving tasks for languages on which training data is not easily accessible or not directly available.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Cross-lingual hypernym discovery",
"sec_num": "4.2.3"
},
{
"text": "Analysis A manual exploration of the results obtained in cross-lingual hypernym discovery reveals a systematic pattern when comparing, for ex-ample, VecMap and our model. It was shown in Table 4 that the performance of our model gradually increased alongside the size of the training data in the target language until surpassing VecMap in the most informed configuration (i.e., EN+2k). Specifically, our model seems to show a higher presence of generic words in the output hypernyms, which may be explained by these being closer in the space. In fact, out of 1000 candidate hyponyms, our model correctly finds person 143 times, as compared to the 111 of VecMap, and this systematically occurs with generic types such as citizen or transport. Let us mention, however, that the considered baselines perform remarkably well in some cases. For example, the English-only VecMap configuration (EN), unlike ours, correctly discovered the following hypernyms for Francesc Maci\u00e0 (a Spanish politician and soldier): politician, ruler, leader and person. These were missing from the prediction of our model in all configurations until the most informed one (EN+2k).",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 194,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Cross-lingual hypernym discovery",
"sec_num": "4.2.3"
},
{
"text": "We have shown how to refine bilingual word embeddings by applying a simple transformation which moves cross-lingual synonyms closer towards their average representation. Before applying this strategy, we start by aligning the monolingual embeddings of the two languages of interest. For this initial alignment, we have considered two state-of-the-art methods from the literature, namely VecMap (Artetxe et al., 2017) and MUSE (Conneau et al., 2018) , which also served as our baselines. Our approach is motivated by the fact that these alignment methods do not change the structure of the individual monolingual spaces. However, the internal structure of embeddings is, at least to some extent, language-specific, and is moreover affected by biases of the corpus from which they are trained, meaning that after the initial alignment significant gaps remain between the representations of cross-lingual synonyms. We tested our approach on a wide array of datasets from different tasks (i.e., bilingual dictionary induction, word similarity and cross-lingual hypernym discovery) with state-of-the-art results. This paper opens up several promising avenues for future work. First, even though both languages are currently being treated symmetrically, the initial monolingual embedding of one of the languages may be more reliable than that of the other. In such cases, it may be of interest to replace the vectors \u00b5 w,w by a weighted average of the monolingual word vectors. Second, while we have only considered bilingual scenarios in this paper, our approach can naturally be applied to scenarios involving more languages. In this case, we would first choose a single target language, and obtain alignments between all the other languages and this target language. To apply our model, we can then simply learn mappings to predict averaged word vectors across all languages. 
Finally, it would also be interesting to use the obtained embeddings in downstream applications such as language identification or crosslingual sentiment analysis, and extend our analysis to other languages, with a particular focus on morphologically-rich languages (after seeing our success with Finnish), for which the bilingual induction task has proved more challenging for standard cross-lingual embedding models .",
"cite_spans": [
{
"start": 394,
"end": 416,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 426,
"end": 448,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
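The refinement summarized above admits a compact sketch: for each translation pair (w, w'), compute the midpoint mu = (v_w + v_w')/2 and learn mappings that move both monolingual spaces towards these midpoints. The two-pair, two-dimensional example below is a toy case in which the least-squares fit is exact; the actual model learns the mapping over the full training dictionary so that it generalizes to words outside the dictionary, which is what also produces the monolingual changes analyzed earlier.

```python
import numpy as np

# Toy aligned embeddings for two translation pairs (source, target).
src = np.array([[1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[0.8, 0.2], [0.2, 0.8]])

mu = (src + tgt) / 2.0                         # cross-lingual midpoints

# Learn linear maps sending each monolingual space towards the midpoints.
Ms, *_ = np.linalg.lstsq(src, mu, rcond=None)  # source -> midpoint
Mt, *_ = np.linalg.lstsq(tgt, mu, rcond=None)  # target -> midpoint
new_src, new_tgt = src @ Ms, tgt @ Mt

def gap(a, b):
    """Average Euclidean distance between corresponding rows."""
    return float(np.linalg.norm(a - b, axis=1).mean())

print(gap(src, tgt), gap(new_src, new_tgt))    # the gap shrinks after mapping
```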
{
"text": "Very recently,Kementchedjhieva et al. (2018) showed that projecting both monolingual embedding spaces onto a third space (instead of directly onto each other) using a generalized Procrustes analysis facilitates the learning of alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "UMBC, Spanish Billion-Words and ItWaC are the official corpora of the hypernym discovery SemEval task (Section 4.2.3) for English, Spanish and Italian, respectively.3 http://www.statmt.org/wmt16/ translation-task.html 4 github.com/artetxem/vecmap 5 github.com/facebookresearch/MUSE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The results on this task are lower than those reported inArtetxe et al. (2017). This is due to the different corpora and embedding algorithms used to train the monolingual embeddings. In particular, they use corpora including Wikipedia, which is comparable across languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that this task is harder than hypernymy detection(Upadhyay et al., 2018). Hypernymy detection is framed as a binary classification task, while in hypernym discovery hypernyms have to be retrieved from the whole vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Yerai Doval is supported by the Spanish Ministry of Economy, Industry and Competitiveness (MINECO) through projects FFI2014-51978-C2-2-R, TIN201785160C21-R and TIN201785160C22-R; the Spanish State Secretariat for Research, Development and Innovation (which belongs to MINECO) and the European Social Fund (ESF) under an FPI fellowship (BES-2015-073768) associated to project FFI2014-51978-C2-1-R; and by the Galician Regional Government under project ED431D 2017/12. Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert have been supported by ERC Starting Grant 637277.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Massively multilingual word embeddings",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.01925"
]
},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2289--2294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2289-2294.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intel- ligence (AAAI-18).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Sixth International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural ma- chine translation. In Proceedings of the Sixth Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
},
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "43",
"issue": "",
"pages": "209--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209-226.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Crim at semeval-2018 task 9: A hybrid approach to hypernym discovery",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Bernier-Colborne",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Barriere",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Bernier-Colborne and Caroline Barriere. 2018. Crim at semeval-2018 task 9: A hybrid approach to hypernym discovery. In Proceedings of The 12th In- ternational Workshop on Semantic Evaluation, New Orleans, Louisiana.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "5",
"issue": "1",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion of Computational Linguistics, 5(1):135-146.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semeval-2016 task 13: Taxonomy extraction evaluation (texeval-2)",
"authors": [
{
"first": "Georgeta",
"middle": [],
"last": "Bordea",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1081--1091",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgeta Bordea, Els Lefever, and Paul Buitelaar. 2016. Semeval-2016 task 13: Taxonomy extrac- tion evaluation (texeval-2). In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 1081-1091.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Vered Shwartz, Roberto Navigli, and Horacio Saggion",
"authors": [
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Claudio",
"middle": [
"Delli"
],
"last": "Bovi",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Oramas",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Pasini",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose Camacho-Collados, Claudio Delli Bovi, Luis Espinosa-Anke, Sergio Oramas, Tommaso Pasini, Enrico Santus, Vered Shwartz, Roberto Navigli, and Horacio Saggion. 2018. SemEval-2018 Task 9: Hy- pernym Discovery. In Proceedings of SemEval, New Orleans, LA, United States.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semeval-2017 task 2: Multilingual and cross-lingual semantic word similarity",
"authors": [
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Taher"
],
"last": "Pilehvar",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "15--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. Semeval- 2017 task 2: Multilingual and cross-lingual semantic word similarity. In Proceedings of the 11th Interna- tional Workshop on Semantic Evaluation (SemEval- 2017), pages 15-26.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Framework for the Construction of Monolingual and Cross-lingual Word Similarity Datasets",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing -Short Papers",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A Framework for the Construction of Monolingual and Cross-lingual Word Similarity Datasets. In Proceedings of the 53rd Annual Meeting of the Association for Com- putational Linguistics and the 7th International Joint Conference on Natural Language Processing -Short Papers, pages 1-7, Beijing, China.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Spanish Billion Words Corpus and Embeddings",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Cardellino",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Cardellino. 2016. Spanish Bil- lion Words Corpus and Embeddings.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Frustratingly easy meta-embedding-computing metaembeddings by averaging source word embeddings",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Coates",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "194--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Coates and Danushka Bollegala. 2018. Frus- tratingly easy meta-embedding-computing meta- embeddings by averaging source word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 194-198.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In Proceed- ings of ICLR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Does the world look different in different languages?",
"authors": [
{
"first": "Ernest",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 2015,
"venue": "Artif. Intell",
"volume": "229",
"issue": "",
"pages": "202--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ernest Davis. 2015. Does the world look different in different languages? Artif. Intell., 229:202-209.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improving zero-shot learning by mitigating the hubness problem",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR, Workshop track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Ba- roni. 2015. Improving zero-shot learning by mitigat- ing the hubness problem. In Proceedings of ICLR, Workshop track.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Supervised distributional hypernym discovery via domain adaptation",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Espinosa-Anke",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Claudio",
"middle": [
"Delli"
],
"last": "Bovi",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "424--435",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Espinosa-Anke, Jose Camacho-Collados, Claudio Delli Bovi, and Horacio Saggion. 2016. Supervised distributional hypernym discovery via domain adap- tation. In Proceedings of EMNLP, pages 424-435.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving vector space word representations using multilingual correlation",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "462--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 462-471.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Gabrilovich Evgeniy, Matias Yossi, Rivlin Ehud, Solan Zach, Wolfman Gadi, and Rup- pin Eytan. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems, 20(1):116-131.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The distributional inclusion hypotheses and lexical entailment",
"authors": [
{
"first": "Maayan",
"middle": [],
"last": "Geffet",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maayan Geffet and Ido Dagan. 2005. The distribu- tional inclusion hypotheses and lexical entailment. In Proceedings of the 43rd Annual Meeting on Asso- ciation for Computational Linguistics, pages 107- 114. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bilingual embeddings with random walks over multilingual wordnets. Knowledge-Based Systems",
"authors": [
{
"first": "Josu",
"middle": [],
"last": "Goikoetxea",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josu Goikoetxea, Aitor Soroa, and Eneko Agirre. 2018. Bilingual embeddings with random walks over mul- tilingual wordnets. Knowledge-Based Systems.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "UMBC EBIQUITY-CORE: Semantic textual similarity systems",
"authors": [
{
"first": "Lushan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Abhay",
"middle": [],
"last": "Kashyap",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "44--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lushan Han, Abhay Kashyap, Tim Finin, James Mayfield, and Jonathan Weese. 2013. UMBC EBIQUITY-CORE: Semantic textual similarity sys- tems. In Proceedings of the Second Joint Con- ference on Lexical and Computational Semantics, pages 44-52.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of COL-ING 1992",
"volume": "",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proc. of COL- ING 1992, pages 539-545.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multilingual models for compositional distributed semantics",
"authors": [],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "58--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014. Multi- lingual models for compositional distributed seman- tics. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), volume 1, pages 58-68.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Stics: searching with strings, things, and cats",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "Dragan",
"middle": [],
"last": "Milchevski",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval",
"volume": "",
"issue": "",
"pages": "1247--1248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Hoffart, Dragan Milchevski, and Gerhard Weikum. 2014. Stics: searching with strings, things, and cats. In Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval, pages 1247-1248. ACM.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Generalizing procrustes analysis for better bilingual dictionary induction",
"authors": [
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yova Kementchedjhieva, Sebastian Ruder, Ryan Cot- terell, and Anders S\u00f8gaard. 2018. Generalizing pro- crustes analysis for better bilingual dictionary induc- tion. In Proceedings of the Conference on Compu- tational Natural Language Learning.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Sixth International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of the Sixth International Conference on Learning Representations.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Separated by an un-common language: Towards judgment language informed vector space modeling",
"authors": [
{
"first": "Ira",
"middle": [],
"last": "Leviant",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.00106"
]
},
"num": null,
"urls": [],
"raw_text": "Ira Leviant and Roi Reichart. 2015. Separated by an un-common language: Towards judgment language informed vector space modeling. arXiv preprint arXiv:1508.00106.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A strong baseline for learning cross-lingual word embeddings from sentence alignments",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "765--774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Anders S\u00f8gaard, and Yoav Goldberg. 2017. A strong baseline for learning cross-lingual word embeddings from sentence alignments. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, volume 1, pages 765-774.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Bilingual word representations with monolingual quality in mind",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D Man- ning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for ma- chine translation. arXiv preprint arXiv:1309.4168.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bilingual word embeddings from parallel and nonparallel corpora for cross-language text classification",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Mogadala",
"suffix": ""
},
{
"first": "Achim",
"middle": [],
"last": "Rettinger",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "692--702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Mogadala and Achim Rettinger. 2016. Bilin- gual word embeddings from parallel and non- parallel corpora for cross-language text classifica- tion. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 692-702.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Semantic Specialisation of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrksic",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [
"\u00d3"
],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Ira",
"middle": [],
"last": "Leviant",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Gai",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrksic, Ivan Vuli\u0107, Diarmuid\u00d3 S\u00e9aghdha, Ira Leviant, Roi Reichart, Milica Gai, Anna Korhonen, and Steve Young. 2017. Semantic Specialisation of Distributional Word Vector Spaces using Monolin- gual and Cross-Lingual Constraints. TACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532-1543.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Question answering by predictive annotation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Prager",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"W"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Czuba",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Open Domain Question Answering",
"volume": "",
"issue": "",
"pages": "307--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Prager, Jennifer Chu-Carroll, Eric W Brown, and Krzysztof Czuba. 2008. Question answering by pre- dictive annotation. In Advances in Open Domain Question Answering, pages 307-347. Springer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Hubs in space: Popular nearest neighbors in high-dimensional data",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Radovanovi\u0107",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Nanopoulos",
"suffix": ""
},
{
"first": "Mirjana",
"middle": [],
"last": "Ivanovi\u0107",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "2487--2531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Radovanovi\u0107, Alexandros Nanopoulos, and Mir- jana Ivanovi\u0107. 2010. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Ma- chine Learning Research, 11(Sep):2487-2531.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Relations such as hypernymy: Identifying and exploiting hearst patterns in distributional vectors for lexical entailment",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "2163--2172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller and Katrin Erk. 2016. Relations such as hypernymy: Identifying and exploiting hearst patterns in distributional vectors for lexical entail- ment. In Proceedings of EMNLP, pages 2163-2172, Austin, Texas.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Contextual correlates of synonymy",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Rubenstein",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Goodenough",
"suffix": ""
}
],
"year": 1965,
"venue": "Communications of the ACM",
"volume": "8",
"issue": "10",
"pages": "627--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM, 8(10):627-633.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2018. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A generalized solution of the orthogonal procrustes problem",
"authors": [
{
"first": "Peter",
"middle": [
"H"
],
"last": "Sch\u00f6nemann",
"suffix": ""
}
],
"year": 1966,
"venue": "Psychometrika",
"volume": "31",
"issue": "1",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter H Sch\u00f6nemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1-10.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Improving hypernymy detection with an integrated path-based and distributional method",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceed- ings of ACL, Berlin, Germany.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Shwartz, Enrico Santus, and Dominik Schlechtweg. 2017. Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection. In Proceedings of EACL, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [
{
"first": "Samuel",
"middle": [
"L"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [
"H",
"P"
],
"last": "Turban",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Hamblin",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Inverted indexing for cross-lingual NLP",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "H\u00e9ctor",
"middle": [],
"last": "Mart\u00ednez Alonso",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Johannsen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1713--1722",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard,\u017deljko Agi\u0107, H\u00e9ctor Mart\u00ednez Alonso, Barbara Plank, Bernd Bohnet, and Anders Jo- hannsen. 2015. Inverted indexing for cross-lingual NLP. In Proceedings of ACL, pages 1713-1722.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "On the limitations of unsupervised bilingual dictionary induction",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.03620"
]
},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. arXiv preprint arXiv:1805.03620.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of AAAI.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Cross-lingual wikification using multilingual embeddings",
"authors": [
{
"first": "Chen-Tse",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "589--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wik- ification using multilingual embeddings. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 589-598.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Cross-lingual models of word embeddings: An empirical comparison",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1661--1670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word em- beddings: An empirical comparison. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 1661-1670.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Robust cross-lingual hypernymy detection using dependency context",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay, Yogarshi Vyas, Marine Carpuat, and Dan Roth. 2018. Robust cross-lingual hypernymy detection using dependency context. In Proceedings of NAACL.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "719--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2015a. Bilin- gual word embeddings from non-parallel document- aligned data applied to bilingual lexicon induction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 2: Short Papers), vol- ume 2, pages 719-725.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "363--372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2015b. Mono- lingual and cross-lingual information retrieval mod- els based on (bilingual) word embeddings. In Pro- ceedings of the 38th international ACM SIGIR con- ference on research and development in information retrieval, pages 363-372. ACM.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Bilingual distributed word representations from documentaligned comparable data",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "953--994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2016. Bilingual distributed word representations from document- aligned comparable data. Journal of Artificial In- telligence Research, 55:953-994.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Normalized word embedding and orthogonal transform for bilingual word translation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiye",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1006--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. In Proceed- ings of the 2015 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Robust question answering over the web of linked data",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Yahya",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Berberich",
"suffix": ""
},
{
"first": "Shady",
"middle": [],
"last": "Elbassuoni",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd ACM international conference on Conference on information & knowledge management",
"volume": "",
"issue": "",
"pages": "1107--1116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, and Gerhard Weikum. 2013. Robust question an- swering over the web of linked data. In Proceedings of the 22nd ACM international conference on Con- ference on information & knowledge management, pages 1107-1116. ACM.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Learning word meta-embeddings",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2016. Learning word meta-embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Learning term embeddings for hypernymy identification",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haixun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xuemin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "1390--1397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Yu, Haixun Wang, Xuemin Lin, and Min Wang. 2015. Learning term embeddings for hypernymy identification. In Proceedings of IJCAI, pages 1390-1397.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Bilingual word embeddings for phrase-based machine translation",
"authors": [
{
"first": "Will",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1393--1398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Y. Zou, Richard Socher, Daniel M. Cer, and Christopher D. Manning. 2013. Bilingual word em- beddings for phrase-based machine translation. In Proceedings of EMNLP, pages 1393-1398, Seattle, USA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Comparative average similarity between VecMap and MUSE (blue) and our proposed model (red) on the SemEval cross-lingual similarity datasets.",
"uris": null
},
"TABREF0": {
"text": "Table 1: Bilingual dictionary induction results. Precision at k (P @K) performance for Spanish (ES), Italian (IT), German (DE) and Finnish (FI), using English (EN) as source language.",
"type_str": "table",
"content": "<table><tr><td>VecMap</td><td>36.0 59.8</td><td>65.6 35.5 57.2</td><td>63.9 31.7 54.2</td><td>60.2 17.2 36.4</td><td>43.7</td></tr><tr><td>VecMap \u00b5</td><td>37.8 61.5</td><td>67.1 36.3 59.2</td><td>66.3 33.5 57.3</td><td>61.7 18.5 40.9</td><td>48.3</td></tr><tr><td>MUSE</td><td>37.1 59.0</td><td>65.2 36.3 57.3</td><td>62.9 32.5 53.7</td><td>59.0 18.2 35.2</td><td>42.4</td></tr><tr><td>MUSE \u00b5</td><td>38.3 62.3</td><td>67.2 37.0 59.0</td><td>65.7 33.7 57.0</td><td>62.2 19.4 41.1</td><td>49.0</td></tr></table>",
"html": null,
"num": null
},
"TABREF1": {
"text": "VecMap 74.1 73.9 67.9 67.0 42.0 40.7 77.8 77.5 70.0 71.4 86.6 88.0 67.2 69.0 64.0 66.9 70.1 70.1 72.7 72.2 80.2 79.7VecMap \u00b5 75.0 74.8 70.5 70.1 43.8 41.8 78.0 76.6 71.5 72.1 87.6 89.4 68.4 68.9 65.3 67.3 70.9 70.7 72.7 72.4 81.0 81.3MUSE74.2 74.2 68.3 67.6 42.6 41.5 78.6 78.4 70.5 71.9 86.6 88.3 67.4 69.2 64.1 66.9 69.8 69.8 72.5 72.5 80.3 80.1",
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"2\">English</td><td/><td/><td/><td/><td colspan=\"2\">Spanish</td><td/><td/><td colspan=\"2\">Italian</td><td/><td/><td/><td colspan=\"2\">German</td><td/></tr><tr><td>Model</td><td colspan=\"22\">SemEval WordSim SimLex RG-65 SemEval RG-65 SemEval WordSim SemEval WordSim RG-65</td></tr><tr><td/><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td></tr><tr><td>MUSE \u00b5</td><td colspan=\"22\">75.0 74.8 70.8 70.4 44.2 42.4 78.3 77.5 71.8 72.3 87.7 89.3 68.6 69.1 65.3 67.2 70.4 70.2 72.2 72.1 80.3 80.0</td></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"text": "Monolingual word similarity results. Pearson (r) and Spearman (\u03c1) correlation.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF3": {
"text": "VecMap 71.7 71.6 82.1 82.4 69.6 69.6 60.2 63.1 71.6 71.3 64.1 65.9 78.1 78.8 VecMap \u00b5 71.7 71.3 82.1 82.8 70.2 69.9 61.3 63.0 72.0 71.5 64.2 65.4 78.6 79.7 MUSE 72.0 72.0 81.9 82.3 69.4 69.4 59.9 62.7 70.4 70.1 63.5 65.1 78.4 79.5",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"8\">English-Spanish English-Italian</td><td/><td colspan=\"4\">English-German</td></tr><tr><td>Model</td><td colspan=\"2\">SemEval</td><td colspan=\"2\">RG-65</td><td colspan=\"10\">SemEval WordSim SemEval WordSim RG-65</td></tr><tr><td/><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td></tr><tr><td>MUSE \u00b5</td><td colspan=\"14\">72.2 71.8 82.3 82.5 70.5 70.1 61.2 62.7 71.9 71.4 64.1 65.3 78.8 80.5</td></tr></table>",
"html": null,
"num": null
},
"TABREF4": {
"text": "Cross-lingual word similarity results. Pearson (r) and Spearman (\u03c1) correlation.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF6": {
"text": "Results on the hypernym discovery task.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}