{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:53:04.676507Z"
},
"title": "Hierarchical Mapping for Crosslingual Word Embedding Alignment",
"authors": [
{
"first": "Ion",
"middle": [],
"last": "Madrazo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Boise State University",
"location": {}
},
"email": "ionmadrazo@boisestate.edu"
},
{
"first": "Maria",
"middle": [
"Soledad"
],
"last": "Pera",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Boise State University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The alignment of word embedding spaces in different languages into a common crosslingual space has recently been in vogue. Strategies that do so compute pairwise alignments and then map multiple languages to a single pivot language (most often English). These strategies, however, are biased towards the choice of the pivot language, given that language proximity and the linguistic characteristics of the target language can strongly impact the resultant crosslingual space to the detriment of typologically distant languages. We present a strategy that eliminates the need for a pivot language by learning the mappings across languages in a hierarchical way. Experiments demonstrate that our strategy significantly improves vocabulary induction scores in all existing benchmarks, as well as in a new non-English-centered benchmark we built, which we make publicly available.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The alignment of word embedding spaces in different languages into a common crosslingual space has recently been in vogue. Strategies that do so compute pairwise alignments and then map multiple languages to a single pivot language (most often English). These strategies, however, are biased towards the choice of the pivot language, given that language proximity and the linguistic characteristics of the target language can strongly impact the resultant crosslingual space to the detriment of typologically distant languages. We present a strategy that eliminates the need for a pivot language by learning the mappings across languages in a hierarchical way. Experiments demonstrate that our strategy significantly improves vocabulary induction scores in all existing benchmarks, as well as in a new non-English-centered benchmark we built, which we make publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embeddings have changed how we build text processing applications, given their capabilities for representing the meaning of words (Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017) . Traditional embedding-generation strategies create different embeddings for the same word depending on the language. Even if the embeddings themselves are different across languages, their distributions tend to be consistent: the relative distances across word embeddings are preserved regardless of the language (Mikolov et al., 2013b) . This behavior has been exploited for crosslingual embedding generation by aligning any two monolingual embedding spaces into one (Dinu et al., 2014; Xing et al., 2015; Artetxe et al., 2016) .",
"cite_spans": [
{
"start": 135,
"end": 158,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF32"
},
{
"start": 159,
"end": 183,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 184,
"end": 208,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 522,
"end": 545,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF33"
},
{
"start": 678,
"end": 697,
"text": "(Dinu et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 698,
"end": 716,
"text": "Xing et al., 2015;",
"ref_id": "BIBREF44"
},
{
"start": 717,
"end": 738,
"text": "Artetxe et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Alignment techniques have been successful in generating bilingual embedding spaces that can later be merged into a crosslingual space using a pivoting language, English being the most common choice. Unfortunately, mapping one language into another suffers from a neutrality problem, as the resultant bilingual space is impacted by language-specific phenomena and corpus-specific biases of the target language (Doval et al., 2018) . To address this issue, Doval et al. (2018) propose mapping any two languages into a different middle space. This mapping, however, precludes the use of a pivot language for merging multiple bilingual spaces into a crosslingual one, limiting the solution to a bilingual scenario. Additionally, the pivoting strategy suffers from a generalized bias problem, as languages that are the most similar to the pivot obtain a better alignment and are therefore better represented in the crosslingual space. This is because language proximity is a key factor when learning alignments, as evidenced by the results in Artetxe et al. (2017) , which indicate that when using English (Indo-European) as a pivot, the vocabulary induction results for Finnish (Uralic) are about 10 points below the rest of the Indo-European languages under study.",
"cite_spans": [
{
"start": 409,
"end": 429,
"text": "(Doval et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 455,
"end": 474,
"text": "Doval et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 1043,
"end": 1064,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "If we want to incorporate all languages into the same crosslingual space regardless of their characteristics, we need to go beyond the train-bilingual/merge-by-pivoting (TB/MP) model, and instead seek solutions that can directly generate crosslingual spaces without requiring a bilingual step. This motivates the design of HCEG (Hierarchical Crosslingual Embedding Generation), the hierarchical pivotless approach for generating crosslingual embedding spaces that we present in this paper. HCEG addresses both the language proximity and target-space bias problems by learning a compositional mapping across multiple languages in a hierarchical fashion. This is accomplished by taking advantage of a language family tree for aggregating multiple languages into a single crosslingual space. What distinguishes HCEG from TB/MP strategies is that it does not need to include the pivot language in all mapping functions. This enables the option to learn mappings between typologically similar languages, known to yield better quality mappings (Artetxe et al., 2017) . The main contributions of our work include:",
"cite_spans": [
{
"start": 1037,
"end": 1059,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A strategy 1 that leverages a language family tree for learning mapping matrices that are composed hierarchically to yield crosslingual embedding spaces for language families.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 An analysis of the benefits of hierarchically generating mappings across multiple languages compared to traditional unsupervised and supervised TB/MP alignment strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent interest in crosslingual word embedding generation has led to manifold strategies that can be classified into four groups (Ruder et al., 2017) : (1) Mapping techniques that rely on a bilingual lexicon for mapping an already trained monolingual space into another (Mikolov et al., 2013b; Artetxe et al., 2017; Doval et al., 2018) ;",
"cite_spans": [
{
"start": 129,
"end": 149,
"text": "(Ruder et al., 2017)",
"ref_id": "BIBREF35"
},
{
"start": 270,
"end": 293,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF33"
},
{
"start": 294,
"end": 315,
"text": "Artetxe et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 316,
"end": 335,
"text": "Doval et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "(2) Pseudo-crosslingual techniques that generate synthetic crosslingual corpora that are then used in a traditional monolingual strategy, by randomly replacing words of a text with their translations (Gouws and S\u00f8gaard, 2015; Duong et al., 2016) or by combining texts in various languages into one (Vuli\u0107 and Moens, 2016) ; (3) Approaches that only optimize for a crosslingual objective function, which require parallel corpora in the form of aligned sentences (Hermann and Blunsom, 2013; Lauly et al., 2014) or texts; and (4) Approaches using a joint objective function that optimizes both mono- and crosslingual loss, which rely on parallel corpora aligned at the word (Zou et al., 2013; Luong et al., 2015) or sentence level (Coulmance et al., 2015) . A key factor for crosslingual embedding generation techniques is the amount of supervised signal needed. Parallel corpora are a scarce resource, even nonexistent for some isolated or low-resource languages. Thus, we focus on mapping-based strategies, which can go from requiring just a bilingual lexicon (Mikolov et al., 2013b) to absolutely no supervised signal (Artetxe et al., 2018) . This aligns with one of the premises of our research: enabling the generation of a single crosslingual embedding space for as many languages as possible. Mikolov et al. (2013b) first introduced a mapping strategy for aligning two monolingual spaces that learns a linear transformation from source to target space using stochastic gradient descent. This approach was later enhanced with the use of least squares for finding the optimal solution, L2-normalizing the word embeddings, or constraining the mapping matrix to be orthogonal (Dinu et al., 2014; Shigeto et al., 2015; Xing et al., 2015; Artetxe et al., 2016; Smith et al., 2017) ; enhancements that soon became standard in the area.\nThese models, however, are affected by hubness, where some words tend to be in the neighborhood of an exceptionally large number of other words, causing problems when using nearest-neighbor as the retrieval algorithm, and neutrality, where the resultant crosslingual space is highly conditioned by the characteristics of the language used as target. Hubness was addressed by a correction applied to nearest-neighbor retrieval, either using an inverted softmax (Smith et al., 2017) or a cross-domain similarity local scaling, later incorporated as part of the training loss. Neutrality was noted by Doval et al. (2018) , who proposed using two independent linear transformations so that the resulting crosslingual space is in a middle point between the two languages rather than just on the target language, and therefore not biased towards either language.",
"cite_spans": [
{
"start": 200,
"end": 225,
"text": "(Gouws and S\u00f8gaard, 2015;",
"ref_id": "BIBREF19"
},
{
"start": 226,
"end": 245,
"text": "Duong et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 298,
"end": 321,
"text": "(Vuli\u0107 and Moens, 2016)",
"ref_id": "BIBREF42"
},
{
"start": 461,
"end": 488,
"text": "(Hermann and Blunsom, 2013;",
"ref_id": null
},
{
"start": 489,
"end": 508,
"text": "Lauly et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 672,
"end": 690,
"text": "(Zou et al., 2013;",
"ref_id": "BIBREF45"
},
{
"start": 691,
"end": 710,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 729,
"end": 752,
"text": "Coulmance et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 1056,
"end": 1079,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF33"
},
{
"start": 1115,
"end": 1137,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 1295,
"end": 1317,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF33"
},
{
"start": 1673,
"end": 1692,
"text": "(Dinu et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 1693,
"end": 1714,
"text": "Shigeto et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 1715,
"end": 1733,
"text": "Xing et al., 2015;",
"ref_id": "BIBREF44"
},
{
"start": 1734,
"end": 1755,
"text": "Artetxe et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 1756,
"end": 1775,
"text": "Smith et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 2289,
"end": 2309,
"text": "(Smith et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 2429,
"end": 2448,
"text": "Doval et al. (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other important trends in the area concentrate on (i) the search for unsupervised techniques for learning mapping functions (Artetxe et al., 2018) and their versatility in dealing with low-resource languages; (ii) the long-tail problem, where most existing crosslingual embedding generation strategies tend to under-perform (Braune et al., 2018; Czarnowska et al., 2019) ; and (iii) the formulation of more robust evaluation procedures oriented towards determining the quality of generated crosslingual spaces (Glavas et al., 2019; .",
"cite_spans": [
{
"start": 123,
"end": 144,
"text": "Artetxe et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 323,
"end": 344,
"text": "(Braune et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 345,
"end": 369,
"text": "Czarnowska et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 504,
"end": 525,
"text": "(Glavas et al., 2019;",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most existing works focus on a bilingual scenario. Yet, there is increasing interest in designing strategies that directly consider more than two languages at training time, thus creating fully multilingual spaces that do not depend on the TB/MP model (Kementchedjhieva et al., 2018) for multilingual inference. Attempts to do so include the efforts by , who leverage an inverted index based on the Wikipedia multilingual links to generate multilingual word representations. Wada et al. (2019) instead use a sentence-level neural language model for directly learning multilingual word embeddings, as a result bypassing the need for mapping functions. In the paradigm of aligning pre-trained word embeddings, on which we focus, Heyman et al. (2019) propose a technique that iteratively builds a multilingual space starting from a monolingual space and incrementally incorporating languages into it. Even though this strategy deviates from the traditional TB/MP model, it still preserves the idea of having a pivot language. Chen and Cardie (2018) separate the mapping functions into encoders and decoders, which are not language-pair dependent, unlike those in the TB/MP model. This removes the need for a pivot language, given that the multilingual space is now latent among all encoders and decoders and not centered in a specific language. The same pivot-removal effect is achieved by the strategy introduced in Jawanpuria et al. 2019, which generalizes a bilingual word embedding strategy into a multilingual counterpart by inducing a Mahalanobis similarity metric in the common space. These two strategies, however, still consider all languages equidistant to each other, ignoring the similarities and differences that lie among them.",
"cite_spans": [
{
"start": 261,
"end": 292,
"text": "(Kementchedjhieva et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 484,
"end": 502,
"text": "Wada et al. (2019)",
"ref_id": "BIBREF43"
},
{
"start": 735,
"end": 755,
"text": "Heyman et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 1025,
"end": 1047,
"text": "Chen and Cardie (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is inspired by Doval et al. (2018) and Chen and Cardie (2018) , in the sense that it focuses on obtaining a non-biased or neutral crosslingual space that does not need to be centered in English (or any other pivot language) as the primary source. This neutrality is obtained by a compositional mapping strategy that hierarchically combines mapping functions in order to generate a single, non-language-centered crosslingual space, enabling a better mapping for languages that are distant or non-typologically related to English.",
"cite_spans": [
{
"start": 24,
"end": 43,
"text": "Doval et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 48,
"end": 70,
"text": "Chen and Cardie (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A language family tree is a natural categorization of languages that has historically been used by linguists as a reference that encodes similarities and differences across languages (Comrie, 1989) . For example, based on the relative distances among languages in the tree illustrated in Figure 1 , we infer that both Spanish and Portuguese are relatively similar to each other, given that they are part of the same Italic family. At the same time, both languages are farther apart from English than from each other, and are radically different with respect to Finnish.",
"cite_spans": [
{
"start": 185,
"end": 199,
"text": "(Comrie, 1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Proposed Strategy",
"sec_num": "3"
},
{
"text": "A language family tree offers a natural organization that can be exploited when building crosslingual spaces that integrate typologically diverse languages. We leverage this structure in HCEG, in order to generate a hierarchically compositional crosslingual word embedding space. Unlike traditional TB/MP strategies that generate a single crosslingual space, the result of HCEG is a set of transformation matrices that can be used to hierarchically compose the space required in each use-case. This maximizes the typological intra-similarity among languages used for generating the embedding space, while minimizing the differences across languages that can hinder the quality of the crosslingual embedding space. Thus, if an external application only considers languages that are Germanic, then it can just use the Germanic crosslingual space generated by HCEG, whereas if it needs languages beyond Germanic it can utilize a higher-level family, such as the Indo-European. This cannot be done with the traditional TB/MP model. In that case, if an application is, for example, using only Uralic languages, then it would be forced to use an English-centered crosslingual space; this would result in a decrease in the quality of the crosslingual space used because of the potentially poor quality of mappings between typologically different languages, such as Uralic and Indo-European languages (Artetxe et al., 2017) .",
"cite_spans": [
{
"start": 1383,
"end": 1405,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Strategy",
"sec_num": "3"
},
{
"text": "Let L = {l 1 , . . . , l |L| } be a set of languages considered, F = {f 1 , . . . , f |F | } a set of language families, and S = L \u222a F = {s 1 , . . . , s |F |+|L| } a set of possible language spaces. Let X l \u2208 R V l \u00d7d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "be the set of word embeddings in language l, where V l is the vocabulary of l and d is the number of dimensions of each embedding. Consider T as a language family tree (exemplified in Figure 1 ). The nodes in T represent language spaces in S, while each edge represents a transformation between the two nodes attached to it-that is, W s a \u2190 \u2212s b \u2208 R d\u00d7d refers to the transformation from s b to s a . We denote W s a * \u2190 \u2212s b as the transformation that results from aggregating all transformations in the path from s b to s a , using the dot product:",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "W s a * \u2190 \u2212s b = W s a \u2190 \u2212s t 1 W s t 1 \u2190 \u2212s t 2 W s t 2 \u2190 \u2212s b (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "where the path from",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "s a to s b is s a , s t 1 , s t 2 , s b ; s t 1 and s t 2 are intermediate spaces between s a and s b .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
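Equation (1) composes the per-edge transformation matrices along the tree path by matrix multiplication. The sketch below illustrates this in NumPy; the function name `compose_mapping`, the edge dictionary layout, and the toy family-tree labels are our own illustrative choices, not the paper's code.

```python
import numpy as np

def compose_mapping(path, W):
    """Sketch of Equation (1): aggregate the edge transformations along the
    tree path [s_a, s_t1, ..., s_b] into a single matrix via dot products.
    W[(u, v)] is the d x d transformation from space v to space u."""
    d = W[(path[0], path[1])].shape[0]
    result = np.eye(d)
    for u, v in zip(path, path[1:]):      # walk edge by edge down the path
        result = result @ W[(u, v)]
    return result

# Toy 2-d example: Spanish -> Italic -> Indo-European.
W = {("ie", "it"): np.array([[0., -1.], [1., 0.]]),   # a 90-degree rotation
     ("it", "es"): np.eye(2)}
W_ie_from_es = compose_mapping(["ie", "it", "es"], W)
```

Because each factor is a plain d x d matrix, the composed mapping is itself a d x d matrix and can be cached for any subtree.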
{
"text": "Finally, P is a set of bilingual lexicons. In the rest of this section we describe HCEG in detail. Values given to each hyperparameter mentioned in this section are defined in Section 4.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "P l 1 ,l 2 \u2208 {0, 1} V l 1 \u00d7V l 2 is a bilingual lexicon with word pairs in languages l 1 and l 2 . P l 1 ,l 2 (i, j) = 1 if the i th word of V l 1 and the j th word of V l 2 are aligned, P l 1 ,l 2 (i, j) = 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "When dealing with embeddings generated from different sources and languages, it is important to normalize them. To do so, HCEG follows a normalization sequence shown to be beneficial (Artetxe et al., 2018) , which consists of length normalization, mean centering, and a second length normalization. The final length normalization makes computing cosine similarity between embeddings more efficient, reducing it to a dot product given that the embeddings are of unit length.",
"cite_spans": [
{
"start": 187,
"end": 209,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Normalization",
"sec_num": "3.2"
},
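The normalization sequence above (length-normalize, mean-center, length-normalize again) can be sketched in a few lines of NumPy. This is an illustrative implementation of the described steps, not the authors' code; the function name is our own.

```python
import numpy as np

def normalize_embeddings(X):
    """Length-normalize rows, mean-center each dimension, then
    length-normalize again, so cosine similarity between any two rows
    reduces to a plain dot product."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit length
    X = X - X.mean(axis=0, keepdims=True)              # mean-center dims
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit length again
    return X

X = normalize_embeddings(np.random.rand(1000, 300))
# Rows are unit-length, so X @ X.T directly yields cosine similarities.
```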
{
"text": "In order to generate a crosslingual embedding space, HCEG requires a set P of aligned words across different languages. When using HCEG in a supervised way, P can be any existing resource consisting of bilingual lexicons, such as the ones described in Section 4.1. However, the proposed strategy is best exploited when using unsupervised lexicon induction techniques, as they enable generating input lexicons for any pair of languages needed. Unlike TB/MP strategies that can only take advantage of signal that involves the pivot language, HCEG can use signal across all combinations of languages. For example, a TB/MP model where English is the pivot can only use lexicons composed of English words. Instead, HCEG can exploit bilingual lexicons from other languages, such as Spanish-Portuguese or Spanish-Dutch, which, if using the language tree in Figure 1 , would reinforce the training of",
"cite_spans": [],
"ref_spans": [
{
"start": 857,
"end": 865,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "W s it \u2190 \u2212s es , W s it \u2190 \u2212s pt and W s it \u2190 \u2212s es , W s in \u2190 \u2212s it , W s in \u2190 \u2212s ge , W s ge \u2190 \u2212s du , respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "When using HCEG in unsupervised mode, P needs to be automatically inferred. [Figure 2 caption: Distributions of word rankings across languages. The coordinates of each dot (representing a word pair) are determined by the position in the frequency ranking of the word pair in each of the languages. Numbers are written in thousands. Scores computed using FastText embedding rankings and MUSE crosslingual pairs. Pearson's correlation (\u03c1) computed using the full set of word pairs; figures generated using a random sample of 500 word pairs for illustration purposes.] Yet, computing each P l 1 ,l 2 \u2208 P given two monolingual embedding matrices X l 1 and X l 2 is not a trivial task, as X l 1 and",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "X l 2 are not aligned in vocabulary or dimension axes. Artetxe et al. (2018) leverage the fact that the relative distances among words are maintained across languages (Mikolov et al., 2013b) , and thus propose using a language-agnostic representation M l for generating an initial alignment P l 1 ,l 2 :",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 167,
"end": 190,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M l = sorted(X l X \u22a4 l )",
"eq_num": "(2)"
}
],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "where, given that X l is length normalized,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X l X \u22a4 l computes a matrix of dimensions V l \u00d7 V l"
}
],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "containing in each row the cosine similarities of the corresponding word embedding with respect to all other word embeddings. The values in each row are then sorted to generate a distribution representation of each word that, in an ideal case where the isometry assumption holds perfectly, would be language-agnostic. Using the embedding representations M l 1 and M l 2 , P l 1 ,l 2 can be computed by assigning each word its most similar representation as its pair, that is, P l 1 ,l 2 (i, j) = 1 if:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "j = arg max 1\u2264j\u2264V l M l 1 (i, * )M l 2 (j, * ) \u22a4 (3) where M l 1 (i, * ) is the i th row of M l 1 and M l 2 (j, * ) is the j th row of M l 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
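Equations (2) and (3) can be sketched as follows in NumPy. The function names are ours, and we additionally length-normalize the sorted-similarity signatures so that the dot product in Equation (3) behaves as a cosine; that normalization is an assumption on our part, not something stated in the text.

```python
import numpy as np

def similarity_signature(X):
    """Equation (2): sort each word's row of cosine similarities to every
    word of its own language, yielding a (roughly) language-agnostic
    signature. Assumes rows of X are already length-normalized."""
    M = np.sort(X @ X.T, axis=1)
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def initial_lexicon(X1, X2):
    """Equation (3), brute force: pair word i of language 1 with the word
    j of language 2 whose signature matches best."""
    M1, M2 = similarity_signature(X1), similarity_signature(X2)
    return np.argmax(M1 @ M2.T, axis=1)   # best j for each i
```

Under a perfect isometry (language 2 being a rotation plus re-indexing of language 1), this recovers the word correspondence exactly; with real embeddings it only yields a noisy initial lexicon, as the text notes.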
{
"text": "The results in Artetxe et al. (2018) indicate that this assumption is strong enough to generate an initial alignment across languages. However, as we demonstrate in Section 5.1, the quality of this type of initial alignment depends on the languages used, making this initialization not applicable for languages that are typologically too distant from each other-a statement also echoed by Artetxe et al. (2018) and .",
"cite_spans": [
{
"start": 15,
"end": 36,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 394,
"end": 415,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "To ensure a more robust initialization, we enhance the strategy presented in Artetxe et al. (2018) by introducing a new signal based on the frequency of use of words. Lin et al. (2012) found that the top-2 most frequent words tend to be consistent across different languages. Motivated by this result, we measure to what extent the frequency rankings of words correlate across languages. As shown in Figure 2 , the word-frequency rankings are strongly correlated across languages, meaning that popular words tend to be popular regardless of the language. We exploit this behavior in order to reduce the search space of Equation (3) as follows:",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 167,
"end": 184,
"text": "Lin et al. (2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 401,
"end": 409,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "j = arg max i\u2212t\u2264j\u2264i+t M l 1 (i, * )M l 2 (j, * ) \u22a4 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
{
"text": "where t is a value used to determine the search window. Note that we assume the embeddings in any matrix X l are sorted in descending order of frequency; that is, the embedding in the first row represents the most frequent word of language l. Apart from improving the overall quality of the inferred lexicons (see Section 5.1), incorporating a frequency-ranking-based search as part of the initialization reduces the computation time needed, as the search space is considerably smaller.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pairs",
"sec_num": "3.3"
},
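The windowed search of Equation (4) restricts the candidates for source word i to the 2t + 1 rows around rank i, since frequency ranks correlate across languages. A minimal sketch, with an illustrative function name and data layout, assuming signature matrices whose rows are sorted by frequency (most frequent first):

```python
import numpy as np

def windowed_match(M1, M2, t):
    """Equation (4) sketch: for each source word i, search only candidate
    rows of M2 within +/- t of position i in the frequency ranking, and
    return the best-matching index in language 2."""
    matches = np.empty(len(M1), dtype=int)
    for i in range(len(M1)):
        lo, hi = max(0, i - t), min(len(M2), i + t + 1)
        window = M2[lo:hi] @ M1[i]          # dot products inside the window
        matches[i] = lo + int(np.argmax(window))
    return matches
```

Besides improving precision, the window shrinks the cost of each query from O(|V|) to O(t) dot products.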
{
"text": "Unlike traditional objective functions that optimize a transformation matrix for two languages at a time, the goal of HCEG is to simultaneously optimize the set of all transformation matrices W such that the loss function L is minimized:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg min W L",
"eq_num": "(5)"
}
],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "L is a linear combination of three different losses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "L = \u03b2 1 \u00d7 L align + \u03b2 2 \u00d7 L orth + \u03b2 3 \u00d7 L reg (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "where L align , L orth , and L reg represent the alignment, orthogonality, and regularization losses, and \u03b2 1 , \u03b2 2 , \u03b2 3 are their weights. L align gauges the extent to which training word pairs align. This is done by computing the sum of the cosine similarity among all word pairs in P :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L align = \u2212 \u2211 P l 1 ,l 2 \u2208P P l 1 ,l 2 (W s l 1 ,l 2 * \u2190 \u2212s l1 X l 1 \u2022 W s l 1 ,l 2 * \u2190 \u2212s l2 X l 2 )",
"eq_num": "(7)"
}
],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "where s l 1 ,l 2 refers to the space in the lowest common parent node for s l 1 and s l 2 in T (e.g., s es,en = s in in Figure 1 ). We found that using s l 1 ,l 2 instead of the space in the root node of T improves the overall performance of HCEG, apart from reducing the time taken for training (see Section 5.3). Several researchers have found it beneficial to enforce orthogonality in the transformation matrices W (Xing et al., 2015; Artetxe et al., 2016; Smith et al., 2017) . This constraint ensures that the original quality of the embeddings is not degraded when transforming them to a crosslingual space. For this reason, we incorporate an orthogonality constraint L orth into our loss function in Equation 8, with I being the identity matrix.",
"cite_spans": [
{
"start": 418,
"end": 437,
"text": "(Xing et al., 2015;",
"ref_id": "BIBREF44"
},
{
"start": 438,
"end": 459,
"text": "Artetxe et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 460,
"end": 479,
"text": "Smith et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "L orth = \u2211 W s 1 \u2190 \u2212s 2 \u2208W \u2225I \u2212 W s 1 \u2190 \u2212s 2 W \u22a4 s 1 \u2190 \u2212s 2 \u2225 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "We also find it beneficial to include a regularization term in L:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L reg = W s 1 \u2190 \u2212s 2 \u2208W W s 1 \u2190 \u2212s 2 2",
"eq_num": "(9)"
}
],
"section": "Objective Function",
"sec_num": "3.4"
},
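The three loss terms above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: it assumes row-vector embeddings (a mapping W is applied as X @ W.T), L2-normalized rows so that dot products equal cosine similarities, and a binary alignment matrix P per language pair.

```python
import numpy as np

def alignment_loss(batches):
    """L_align: negative sum of cosine similarities over aligned pairs.
    Each batch is (W1, X1, W2, X2, P): mapped rows of X1 and X2 should
    align wherever P[i, j] = 1."""
    loss = 0.0
    for W1, X1, W2, X2, P in batches:
        sims = (X1 @ W1.T) @ (X2 @ W2.T).T  # pairwise similarities in the shared space
        loss -= np.sum(P * sims)
    return loss

def orthogonality_loss(Ws):
    """L_orth: sum of squared Frobenius norms ||I - W W^T||^2 over all mappings."""
    return sum(np.sum((np.eye(W.shape[0]) - W @ W.T) ** 2) for W in Ws)

def regularization_loss(Ws):
    """L_reg: sum of squared Frobenius norms ||W||^2."""
    return sum(np.sum(W ** 2) for W in Ws)

def total_loss(batches, Ws, b=(0.98, 0.01, 0.01)):
    """L = b1*L_align + b2*L_orth + b3*L_reg, with the weights from Section 4.4."""
    return (b[0] * alignment_loss(batches)
            + b[1] * orthogonality_loss(Ws)
            + b[2] * regularization_loss(Ws))
```

With identity mappings and perfectly aligned embeddings, L_align is simply the negative count of aligned pairs, which makes the relative scale of the three weighted terms easy to inspect.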
{
"text": "3.5 Learning the Parameters HCEG utilizes stochastic gradient descent for tuning the parameters in W with respect to the training word pairs in P . In each iteration, L is computed and backtracked in order to tune each transformation matrix in W such that L is minimized. Batching is used to reduce the computational load in each iteration. A batch of word pairsP is sampled from P by randomly selecting \u03b1 lpairs language pairs as well as \u03b1 wpairs word pairs in eachP l 1 ,l 2 \u2208P -for example, a batch might consist of 10P l 1 ,l 2 matrices each containing 500 aligned words. Iterations are grouped into epochs of \u03b1 iter iterations at the end of which L is computed for the whole P . We take a conservative approach as convergence criterion. If no improvement is found in L in the last \u03b1 conv epochs, the training loop stops.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
{
"text": "We achieve best convergence time initializing each W s 1 \u2190 \u2212s 2 \u2208 W to be orthogonal. We tried several methods for orthogonal initialization, such as simply initializing to the identity matrix. However, we obtained most consistent results using the random semi-orthogonal initialization introduced by Saxe et al. (2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.4"
},
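A random (semi-)orthogonal initialization in the spirit of Saxe et al. (2013) can be obtained by orthogonalizing a Gaussian draw with a QR decomposition. This is a sketch of the general technique, not the authors' exact procedure:

```python
import numpy as np

def semi_orthogonal_init(rows, cols, seed=None):
    """Random (semi-)orthogonal matrix via QR of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(a)            # q has orthonormal columns
    q = q * np.sign(np.diag(r))       # remove the sign ambiguity of the QR factorization
    return q if rows >= cols else q.T
```

For square matrices this yields a fully orthogonal W, so the orthogonality loss starts at zero and the mapped embeddings are initially undistorted.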
{
"text": "As shown by Artetxe et al. (2017) , the initial lexicon P is iteratively improved by using the generated crosslingual space for inferring a new lexicon P \u2032 at the end of each learning phase described in Section 3.5. More specifically, when computing each",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Refinement",
"sec_num": "3.6"
},
{
"text": "P \u2032 l 1 ,l 2 \u2208 P \u2032 , P \u2032 l 1 ,l 2 (i, j) is 1 (0 otherwise) if j = arg max j W s l 1 ,l 2 * \u2190 \u2212s l1 X l 1 (i, * )\u2022 (W s l 1 ,l 2 * \u2190 \u2212s l2 X l 2 (j, * )) \u22a4 (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Refinement",
"sec_num": "3.6"
},
{
"text": "Potentially, any new bilingual lexicon P \u2032 l 1 ,l 2 can be inferred and included in P \u2032 at the end of each learning phase. However, as the cardinality of L grows, this process can take a prohibitive amount of time given combinatorial explosion. Therefore, in practice, we only infer P \u2032 l 1 ,l 2 following a criterion intended to maximize lexicon quality. P \u2032 l 1 ,l 2 is inferred for languages l 1 and l 2 only if l 1 and l 2 are siblings in T (they share the same parent node) or l 1 and l 2 are the best representatives of their corresponding family. A language is deemed the best representative of its family if it is the most frequently-spoken 2 language in its subtree. For example, in Figure 1 , Spanish is the best representative for the Italic family, but not for Indo-European, for which English is used.",
"cite_spans": [],
"ref_spans": [
{
"start": 692,
"end": 700,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Iterative Refinement",
"sec_num": "3.6"
},
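The inference criterion can be made concrete with a toy family tree. The ancestor chains, speaker counts, and helper names below are illustrative assumptions for the sketch, not the paper's actual data or code:

```python
# Leaf-to-root ancestor chains in a toy fragment of the family tree T, and
# hypothetical speaker counts used to pick each family's representative.
PARENTS = {
    "es": ["Italic", "Indo-European", "World"],
    "pt": ["Italic", "Indo-European", "World"],
    "en": ["Germanic", "Indo-European", "World"],
    "fi": ["Uralic", "World"],
}
SPEAKERS = {"es": 480, "pt": 220, "en": 1130, "fi": 5}

def best_representative(family):
    """Most-spoken language whose ancestor chain contains `family`."""
    members = [l for l in PARENTS if family in PARENTS[l]]
    return max(members, key=SPEAKERS.get)

def should_infer(l1, l2):
    """Infer a lexicon only for siblings, or when each language is the
    best representative of its own immediate family."""
    if PARENTS[l1][0] == PARENTS[l2][0]:  # same parent node: siblings
        return True
    return (l1 == best_representative(PARENTS[l1][0])
            and l2 == best_representative(PARENTS[l2][0]))
```

Under this toy tree, Spanish-Portuguese lexicons are inferred because the two are siblings, and Spanish-English lexicons are inferred because each is its family's representative, mirroring the example in the text.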
{
"text": "The set criterion not only reduces the amount of time required to infer P \u2032 but also improves overall HCEG performance. This is due to a better utilization of the hierarchical characteristics of our crosslingual space, only inferring bilingual lexicons from typologically related languages or their best representatives in terms of resource quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Refinement",
"sec_num": "3.6"
},
{
"text": "As discussed in Section 2, one of the issues effecting nearest-neighbor retrieval is hubness (Dinu et al., 2014) , where certain words are in the surrounding of an abnormally large number of other words, causing the nearest-neighbor algorithm to incorrectly prioritize hub words. To address this issue, we use Cross-domain Similarity Local Scaling (CSLS) as the retrieval algorithm during both training and prediction time. CSLS is a rectification for nearest-neighbor retrieval that avoids hubness by counterbalancing the cosine similarity between two embeddings by a factor consisting of the average similarity of each embeddings with its k closest neighbors. Following the criteria in Conneau et al. 2017, we set the number of neighbours used by CSLS to k = 10.",
"cite_spans": [
{
"start": 93,
"end": 112,
"text": "(Dinu et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Criterion",
"sec_num": "3.7"
},
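CSLS can be implemented in a few lines of NumPy. A minimal sketch, assuming both embedding matrices are L2-normalized so that matrix products yield cosine similarities:

```python
import numpy as np

def csls_scores(src, tgt, k=10):
    """CSLS (Conneau et al., 2017): 2*cos(x, y) - r(x) - r(y), where r(.)
    is the mean similarity of a point to its k nearest cross-domain neighbors.
    src: (n, d) and tgt: (m, d) L2-normalized embeddings; returns (n, m) scores."""
    sims = src @ tgt.T                                      # pairwise cosine similarities
    r_src = np.mean(np.sort(sims, axis=1)[:, -k:], axis=1)  # hubness penalty per source word
    r_tgt = np.mean(np.sort(sims, axis=0)[-k:, :], axis=0)  # hubness penalty per target word
    return 2 * sims - r_src[:, None] - r_tgt[None, :]

def csls_retrieve(src, tgt, k=10):
    """Index of the best target translation for each source word under CSLS."""
    return np.argmax(csls_scores(src, tgt, k), axis=1)
```

Because hub words have a large mean similarity to their k neighbors, their scores are penalized uniformly across all queries, which is what prevents them from dominating retrieval.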
{
"text": "We describe below the evaluation set up used for conducting the experiments presented in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "4"
},
{
"text": "Dinu-Artetxe. The Dinu-Artetxe dataset, presented by Dinu et al. (2014) and enhanced by Artetxe et al. (2016) , is the one of the first benchmarks for evaluating crosslingual embeddings. It is composed of English-centered bilingual lexicons for Italian, Spanish, German, and Finnish.",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "Dinu et al. (2014)",
"ref_id": "BIBREF14"
},
{
"start": 88,
"end": 109,
"text": "Artetxe et al. (2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Pair Datasets",
"sec_num": "4.1"
},
{
"text": "The MUSE dataset contains bilingual lexicons for all combinations of German, English, Spanish, French, Italian, and Portuguese. In addition, it includes word pairs for 44 languages with respect to English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MUSE.",
"sec_num": null
},
{
"text": "Panlex. Dinu-Artetxe and MUSE are both English-centered datasets, given that most (if not all) of their word pairs have English as their source or target language. This makes the datasets suboptimal for our purpose of generating and evaluating a non-language centered crosslingual space. For this reason, we generated a dataset using Panlex (Kamholz et al., 2014) , a panlingual lexical database. This dataset (made public in our repository) includes bilingual lexicons for all combinations of 157 languages for which FastText is available, totalling 24,492 bilingual lexicons. Each of the lexicons was generated by randomly sampling 5k words from the top-200k words in the embedding set for the source language, and translating them to the target language using the Panlex database. We find it important to highlight that this dataset contains considerably more noise than other datasets given that Panlex is generated in an automatic way and is not as finely curated by humans as previous datasets. We still find comparisons using this dataset fair, given that its noisy nature should affect all strategies equally.",
"cite_spans": [
{
"start": 341,
"end": 363,
"text": "(Kamholz et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MUSE.",
"sec_num": null
},
{
"text": "As previously stated, we aim to generate a single crosslingual space for as many languages as possible. We started with the 157 languages for which FastText embeddings are available . We then removed languages that did not meet both of the following criteria: (1) there must exist a bilingual lexicon with at least 500 word pairs for the language in any of the datasets described in Section 4.1, and (2) the embedding set provided by FastText must contain at least 20k words. The first criterion is a minimal condition for evaluation, while the second one is necessary for the unsupervised initialization strategy. The criteria are met by 107 languages, which are the ones used in our experiments. Their corresponding ISO-639 codes can be seen later in Table 5 . We use the language family tree defined by Lewis and Gary (2015).",
"cite_spans": [],
"ref_spans": [
{
"start": 753,
"end": 760,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Language Selection and Family Tree",
"sec_num": "4.2"
},
{
"text": "For experimental purposes, each dataset described in Section 4.1 is split into training and testing sets. We use the original train-test splits for Dinu-Artetxe and MUSE. For Panlex, we generate a split randomly sampling word pairs-keeping 80% for the training and the remaining 20% for testing. For development and parameter tuning purposes, we use a disjoint set of word pairs specifically created for this purpose based on the Panlex lexical database. This development set contains 10 different languages with varied popularity. None of the word pairs present in this development set are part of either the train or test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "4.3"
},
{
"text": "The following hyperparameters were manually tuned using the development set described in Section 4.3: \u03b2 1 = 0.98, \u03b2 2 = 0.01, \u03b2 3 = 0.01, Figure 3 : Number of correct word pairs inferred using the unsupervised initialization technique presented by Artetxe et al. (2018) and the Frequency based technique described in Section 3.3.",
"cite_spans": [
{
"start": 248,
"end": 269,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 138,
"end": 146,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "4.4"
},
{
"text": "We discuss below the results of the study conducted over 107 languages to assess HCEG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "We first evaluate the performance of the unsupervised initialization strategy described in Section 3.3, and compare it with the state-of-theart strategy proposed by Artetxe et al. (2018) . In this case, we run both initialization strategies using the top-20k FastText embeddings for all pairwise combinations of the 107 languages we study. For each language pair, we measure how many of the inferred word pairs are present in the corresponding lexicons in the MUSE and Panlex datasets. For MUSE, our proposed initialization strategy (Frequency based) obtains an average of 48.09 correct pairs, an improvement with respect to the 29.62 obtained by the strategy proposed by Artetxe et al. (2018) . For Panlex, the respective average correct pair counts are 1.05 and 0.55. Both differences are statistically significant (p < 0.01) using a paired t-test. The noticeable difference across datasets is due to how the sampling was done for generating the datasets: MUSE contains a considerably higher number of frequent words in comparison to Panlex, making the latter a relatively harder dataset for vocabulary induction. In Figure 3 we illustrate the results of each strategy grouped by language-pair similarity. This similarity is based on the number of common parents the two languages share. For example, in Figure 1 , Spanish has a similarity of 3, 2, and 1 with Portuguese, English, and Finnish, respectively. As we see in Figure 3 , similarity is a factor that strongly determines the quality of the alignment generated by the unsupervised initialization. Even if this phenomenon affects both analyzed strategies, our proposed frequencybased initialization strategy consistently obtains a few more correct word pairs for the least similar language pairs, which, as we show in Table 4 , are key for generating a correct mapping for those languages.",
"cite_spans": [
{
"start": 165,
"end": 186,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 672,
"end": 693,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 1119,
"end": 1127,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1306,
"end": 1314,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1423,
"end": 1431,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1777,
"end": 1784,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unsupervised Initialization",
"sec_num": "5.1"
},
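The similarity measure used for grouping can be sketched as counting shared ancestors in the family tree. The ancestor chains below are an illustrative fragment consistent with the Spanish example in the text, not the actual tree used by the authors:

```python
# Leaf-to-root ancestor chains for a small illustrative fragment of T.
ANCESTORS = {
    "es": ["Italic", "Indo-European", "World"],
    "pt": ["Italic", "Indo-European", "World"],
    "en": ["Germanic", "Indo-European", "World"],
    "fi": ["Uralic", "World"],
}

def similarity(l1, l2):
    """Language-pair similarity: number of family-tree nodes the two share."""
    return len(set(ANCESTORS[l1]) & set(ANCESTORS[l2]))
```

With these chains, Spanish obtains a similarity of 3, 2, and 1 with Portuguese, English, and Finnish, matching the example given for Figure 1.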
{
"text": "In order to contextualize the performance of HCEG with respect to the state-of-the-art (listed in Tables 1 and 2), we measure the word translation prediction capabilities of each of the strategies. We do so using Precision@1 for bilingual lexicon induction as a means to quantify vocabulary induction performance. Scores reported hereafter are average Precision@1 in percentage form, for each of the words in the testing set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "When applicable, we report results for both the supervised (HCEG-S) and unsupervised (HCEG-U) versions of HCEG. In the supervised mode, we train one single model per dataset using all the training word pairs available. We then use this model for computing all pairwise scores. In the unsupervised mode, unless explicitly stated otherwise, we train a single model regardless of the dataset used for testing purposes. This means that, in some cases, the unsupervised mode leverages monolingual data beyond the languages used for testing, as it uses all 107 language embeddings. We found it unfair to train a supervised model using Method en-it en-de en-fi en-es Supervised Mikolov (2013b) 34 Table 1 : Results using the Dinu-Artetxe dataset. Scores marked with (*) were reported by Artetxe et al. (2018) ; the remaining ones were reported in the corresponding original papers.",
"cite_spans": [
{
"start": 671,
"end": 686,
"text": "Mikolov (2013b)",
"ref_id": "BIBREF33"
},
{
"start": 780,
"end": 801,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 690,
"end": 697,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "the Dinu-Artetxe dataset given that it only contains four bilingual lexicons, not enough for training our tree structure. Thus, only unsupervised results are shown for that dataset. As shown in Table 1 , the unsupervised version of HCEG achieves, in most cases, the best performance among all unsupervised strategies, even improving over state-of-the-art supervised models in some cases. The improvement is most noticeable for Italian and Spanish, where HCEG-U obtains an improvement of 1 and 3 points, respectively. A similar behavior can be seen in Table 2 , where we describe the results on the MUSE dataset. Spanish, along with Catalan, Italian, and Portuguese, obtains a substantially larger improvement compared with other languages. We attribute this to the fact that Spanish is the second most resourceful language in terms of corpora after English. This makes the quality of Spanish word embeddings comparably better than other languages, which as a result improves the mapping quality of typologically related languages, such as Portuguese, Italian, or Catalan.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 201,
"text": "Table 1",
"ref_id": null
},
{
"start": 551,
"end": 558,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "To further contextualize the performance of HCEG-U, in terms of its capability for generating crosslingual embeddings in an unsupervised fashion, we conducted further experiments. In Table 3 , we summarize the results obtained from comparing HCEG-U with other unsupervised strategies focused on learning crosslingual word embeddings. In our comparisons we include (i) a direct bilingual learning baseline that simply learns a bilingual mapping using two monolingual word embeddings , (ii) a pivot-based strategy that can leverage a third language for learning a crosslingual space , and (iii) a fully multilingual, pivotless strategy that aggregates languages into a joint space in an iterative manner (Chen and Artetxe et al. (2018) were obtained using the scripts shared by the authors. All the other scores were reported in . HCEG-U \u2212 only considers the 29 languages in the experiment for training.",
"cite_spans": [
{
"start": 712,
"end": 733,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 183,
"end": 190,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "Cardie, 2018). From the reported results, we see that HCEG-U \u2212 outperforms all other considered strategies for 24 out of 30 language pairs. Highest improvements are found for languages of the Italic family (Spanish, Portuguese, Italian, and French). We observe that HCEG-U \u2212 under-performed when the corresponding experiment involved the German language as source or target. We attribute this behavior to the fact that the Italic family is predominant in the languages explored in this experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "In order to perform a fair comparison with respect to the work proposed by Chen and Cardie (2018) , we limited the monolingual data that HCEG-U \u2212 used to the six languages considered in this experiment (results that are reported in Table 3 ). However, in order to show the full potential of HCEG-U, we also include results achieved when using 107 languages (column HCEG-U). As seen in Tables 2 and 3, the differences between HCEG-U \u2212 and HCEG-U are considerable, manifesting the capabilities of the proposed model to take advantage of monolingual data in multiple languages at the same time.",
"cite_spans": [
{
"start": 75,
"end": 97,
"text": "Chen and Cardie (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "The importance of explicitly considering topological connections among languages to Table 3 : Comparison of unsupervised crosslingual embedding learning strategies under different merging scenarios in the MUSE dataset. Direct indicates a traditional bilingual scenario where a mapping from source to target is learned. Pivot uses an auxiliary pivot language (English) for merging multiple languages into the same space. Multi merges all languages into the same space without using a pivot. All scores except HCEG-U were originally reported by Chen and Cardie (2018) . HCEG-U \u2212 only considers the six languages in the experiment for training. Note that HCEG-U is excluded when highlighting the best model (bold), given that it uses monolingual data beyond what other models do.",
"cite_spans": [
{
"start": 543,
"end": 565,
"text": "Chen and Cardie (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "enhance mappings become more evident when analyzing the data in Table 5 . Here we include the pairing that yielded the best and worst mapping for each language, as well as the position of English in the quality ranking. English and Spanish have a strong quality mapping with respect to each other, Spanish being the language with which English obtains the best mapping and English is the second-best mapped language for Spanish. Additionally, Spanish is the language with which Italian, Portuguese, and Catalan obtain the best mapping quality. On the other side of the spectrum, the worst mappings are dominated by two languages, Georgian and Vietnamese, with 40 languages having these two language as worst; this is followed by Maltese, Albanian, and Finnish, with 8 occurrences each. This is not unexpected, as these languages are relatively isolated in the language family tree, and also have a low number of speakers. We also see that English is usually on the top side of the ranking for most languages. For languages that are completely isolated, such as Basque and Yoruba, English tends to be their best mapped language. From this we surmise that when typological relations are lacking, the quality of the embedding space is the only aspect the mapping strategy can rely on.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "Given space constraints, we cannot show the vocabulary induction scores for the 24,492 language pairs in the Panlex dataset. Instead, we group the results using two variables: the sum of number of speakers for each of the two languages, and the minimum similarity (as defined in Section 5.1) for each language with respect to English. We rely on these variables for grouping purposes as they align with two of our objectives for designing HCEG: (1) remove the bias towards the pivot language (English), and (2) improve the performance of low-resource languages by taking advantage of typologically similar languages. Figure 4 captures the improvement (2.7 on average) of HCEG-U over the strategy introduced in Artetxe et al. (2018) (the best-performing benchmark), grouped by the aforementioned variables. We excluded Hindi and Chinese from the figure, as they made any pattern hard to observe given their high number of speakers. The sum of number of speakers axis was also logarithmically scaled to facilitate visualization. The figure captures an evident trend in the similarity axis. The lower the similarity of the language with respect to English, the higher the improvement achieved by HCEG-U. This can be attributed to the manner in which TB/MP models generated the space using English as primary resource, hindering the potential quality of languages that are distant from it. Additionally, we see a less-prominent but existing trend in Figure 4 : Improvement over the strategy proposed by Artetxe et al. (2018) in Panlex, in terms of language similarity and number of speakers. Darker denotes larger improvement. the speaker sum axis. Despite some exceptions, HCEG-U obtains higher differences with respect to Artetxe et al. (2018) the less spoken a language is. A behavior that is similar in essence to a Pareto front can also be depicted from the figure. 
Even if both variables contribute to the difference in improvement of HCEG-U, one variable needs to compensate for the other in order to maximize accuracy. In other words, the improvement is higher the fewer speakers the language pair has or the more distant the two languages are from English, but when both variables go to the extreme, the improvement decreases. The aforementioned trends serve as evidence that the hierarchical structure is indeed important when building a crosslingual space that considers typologically diverse languages, validating our premises for designing HCEG.",
"cite_spans": [
{
"start": 1499,
"end": 1520,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 1720,
"end": 1741,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 617,
"end": 625,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1446,
"end": 1454,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "State-of-the-Art Comparison",
"sec_num": "5.2"
},
{
"text": "In order to assess the validity of each functionality included as part of HCEG, we conducted an ablation study. We summarize the results of this study in Table 4 , where the symbol \u00ac indicates that the subsequent feature is ablated in the model. For example, \u00acHierarchy indicates that the Hierarchy structure is removed, replacing it by a structure where each language needs just one transformation matrix to reach the World languages space. Table 4 : Ablation study.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 4",
"ref_id": null
},
{
"start": 442,
"end": 449,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.3"
},
{
"text": "As indicated by the ablation results, the hierarchical structure is indeed a key part of HCEG, considerably reducing its performance when removed, and having its strongest effect in the dataset with the highest number of languages (i.e., Panlex). The importance of the Iterative Refinement strategy is also noticeable, making the unsupervised version of HCEG useless when removed. The Frequency-based initialization is also a characteristic that considerably improves the results of HCEG-U. Looking deeper into the data, we found 2,198 language pairs (about 9% of all pairs) that obtained a vocabulary induction accuracy close to 0 (<0.05) without using this initialization, but were able to produce enough signal to yield more substantial accuracy values (>10.0) when using the Frequency-based initialization. Finally, the design decisions that we initially took for reducing training time-(i) the orthogonal initialization, (ii) the heuristic based inference, and (iii) using the lowest common root for computing the loss function-also have a positive effect on the performance of the HCEG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.3"
},
{
"text": "One of the premises for building HCEG was to design a strategy that would not require pivots for achieving a single space with multiple word embeddings, given that a pivot induces a bias into the final space that can hinder the quality of the mapping for languages that are too distant to it. In this section we describe the results of experiments conducted for measuring the effect pivot selection can have on the performance of the mapping. For doing so, we measure the Table 6 : Results obtained by existing bilingual mapping strategies using different pivots on the Panlex dataset. Values in each cell indicate the average performance obtained for each of the pairwise combinations of languages under the family noted in the corresponding column title. For example, the first cell indicates the average score obtained for all possible combinations of afro-asiatic languages using English as a pivot. Results are averaged across the strategy presented in and Artetxe et al. (2018) in order to avoid system-specific biases.",
"cite_spans": [
{
"start": 962,
"end": 983,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 472,
"end": 479,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Influence of Pivot Choice",
"sec_num": "5.4"
},
{
"text": "performance of state-of-the-art bilingual mapping strategies in a pivot-based inference scenario. We use 11 different pivots and average the results of two different strategies- and (Artetxe et al., 2018) -grouped by several language families. As depicted by the results presented in Table 6 , selecting a pivot that belongs to the family of the languages being tested is always the best choice. In cases where we considered multiple pivots of the same family, the most resource-rich language resulted in the best option, namely, Spanish in the case of the Italic family and English for the Germanic family. On average, English is the best choice of pivot if all language families need to be considered, followed by Spanish and Portuguese. This validates two of the design decisions for HCEG, that is, the need to avoid selecting a pivot and the importance of using the languages with largest speaker-base when performing language transfer.",
"cite_spans": [
{
"start": 182,
"end": 204,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Influence of Pivot Choice",
"sec_num": "5.4"
},
{
"text": "We have introduced HCEG, a crosslingual space learning strategy that does not depend on a pivot language, as instead, it takes advantage of the natural hierarchy existing among languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Results from extensive studies on 107 languages demonstrate that the proposed strategy outperforms existing crosslingual space generation techniques, in terms of vocabulary induction, for both popular and not so popular languages. HCEG improves the mapping quality of many low-resource languages. We noticed that this improvement mostly happens when a language has more typologically related counterparts, however. Therefore, as future work, we intend to investigate other techniques that can help improve the quality of mapping for typologically isolated low-resource languages. Additionally, it is important to note that the time complexity required by the proposed algorithm is N (N \u22121), with N being the number of languages considered. For the traditional TB/MP strategy, complexity is limited to learning from N language pairs. Therefore, we plan on exploring strategies to reduce the number of language pairs that need to be learned for creating the crosslingual space. Finally, we will explore different data-driven strategies for building the tree structure, such as geographical proximity or lexical overlap, which could lead to better optimized arrangements of the crosslingual space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Resources can be found at https://github.com/ ionmadrazo/HCEG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Based on numbers reported byLewis and Gary (2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "t = 1000, \u03b1 lpairs = 128, \u03b1 wpairs = 2048, \u03b1 iter = 5000, \u03b1 conv = 25.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF4": {
"ref_id": "b4",
"title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2289--2294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving mono- lingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289-2294.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462. ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "789--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 789-798.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vec- tors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Evaluating bilingual word embeddings on the long tail",
"authors": [
{
"first": "Fabienne",
"middle": [],
"last": "Braune",
"suffix": ""
},
{
"first": "Viktor",
"middle": [],
"last": "Hangya",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Eder",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "188--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabienne Braune, Viktor Hangya, Tobias Eder, and Alexander Fraser. 2018. Evaluating bilingual word embeddings on the long tail. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 188-193.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unsupervised multilingual word embeddings",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "261--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xilun Chen and Claire Cardie. 2018. Unsuper- vised multilingual word embeddings. In Pro- ceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 261-270.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Language Universals and Linguistic Typology: Syntax and Morphology",
"authors": [
{
"first": "",
"middle": [],
"last": "Bernard Comrie",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Comrie. 1989. Language Universals and Linguistic Typology: Syntax and Morphology, University of Chicago Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc' Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc' Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Transgram, fast cross-lingual word-embeddings",
"authors": [
{
"first": "Jocelyn",
"middle": [],
"last": "Coulmance",
"suffix": ""
},
{
"first": "Jean-Marc",
"middle": [],
"last": "Marty",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Amine",
"middle": [],
"last": "Benhalloum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1109--1113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wenzek, and Amine Benhalloum. 2015. Trans- gram, fast cross-lingual word-embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1109-1113.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Don't forget the long tail! A comprehensive analysis of morphological generalization in bilingual lexicon induction",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Czarnowska",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "\u00c9douard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNP)",
"volume": "",
"issue": "",
"pages": "973--982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Czarnowska, Sebastian Ruder,\u00c9douard Grave, Ryan Cotterell, and Ann Copestake. 2019. Don't forget the long tail! A comprehen- sive analysis of morphological generalization in bilingual lexicon induction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNP), pages 973-982.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving zero-shot learning by mitigating the hubness problem",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6568"
]
},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving cross-lingual word embeddings by meeting in the middle",
"authors": [
{
"first": "Yerai",
"middle": [],
"last": "Doval",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "294--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yerai Doval, Jose Camacho-Collados, Luis Espinosa Anke, and Steven Schockaert. 2018. Improving cross-lingual word embeddings by meeting in the middle. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 294-304. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning crosslingual word embeddings without bilingual corpora",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Kanayama",
"suffix": ""
},
{
"first": "Tengfei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1285--1295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilin- gual corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1285-1295.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glavas",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Litschko",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vulic",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.00508"
]
},
"num": null,
"urls": [],
"raw_text": "Goran Glavas, Robert Litschko, Sebastian Ruder, and Ivan Vulic. 2019. How to (properly) eval- uate cross-lingual word embeddings: On strong baselines, comparative analyses, and some mis- conceptions. arXiv preprint arXiv:1902.00508.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BilBOWA: Fast bilingual distributed representations without word alignments",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "748--756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed rep- resentations without word alignments. In Inter- national Conference on Machine Learning, pages 748-756.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Simple task-specific bilingual word embeddings",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1386--1390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws and Anders S\u00f8gaard. 2015. Sim- ple task-specific bilingual word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1386-1390.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multilingual distributed representations without word alignment",
"authors": [],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6173"
]
},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2013. Multilingual distributed representations without word alignment. arXiv preprint arXiv:1312.6173.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning unsupervised multilingual word embeddings with incremental multilingual hubs",
"authors": [
{
"first": "Geert",
"middle": [],
"last": "Heyman",
"suffix": ""
},
{
"first": "Bregt",
"middle": [],
"last": "Verreet",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie",
"middle": [
"Francine"
],
"last": "Moens",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1890--1902",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geert Heyman, Bregt Verreet, Ivan Vuli\u0107, and Marie Francine Moens. 2019. Learning unsu- pervised multilingual word embeddings with incremental multilingual hubs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 1890-1902.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning multilingual word embeddings in latent metric space: a geometric approach",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Jawanpuria",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Balgovind",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Bamdev",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "107--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2019. Learning multilingual word embeddings in latent metric space: a geometric approach. Transactions of the Association for Compu- tational Linguistics, 7:107-120.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2979--2984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empir- ical Methods in Natural Language Processing, pages 2979-2984.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Panlex: Building a resource for panlingual lexical translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Kamholz",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Pool",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Colowick",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)",
"volume": "",
"issue": "",
"pages": "3145--3150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Kamholz, Jonathan Pool, and Susan Colowick. 2014. Panlex: Building a resource for panlingual lexical translation. In Proceedings of the Ninth International Conference on Lan- guage Resources and Evaluation (LREC-2014), pages 3145-3150.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Generalizing procrustes analysis for better bilingual dictionary induction",
"authors": [
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "211--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yova Kementchedjhieva, Sebastian Ruder, Ryan Cotterell, and Anders S\u00f8gaard. 2018. Generalizing procrustes analysis for better bilingual dictionary induction. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 211-220.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning multilingual word representations using a bag-of-words autoencoder",
"authors": [
{
"first": "Stanislas",
"middle": [],
"last": "Lauly",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Boulanger",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1401.1803"
]
},
"num": null,
"urls": [],
"raw_text": "Stanislas Lauly, Alex Boulanger, and Hugo Larochelle. 2014. Learning multilingual word representations using a bag-of-words autoen- coder. arXiv preprint arXiv:1401.1803.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Ethnologue: Languages of the world",
"authors": [
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Gary",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "233--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Paul Lewis and F. Gary. 2015. Simons, and Charles D. Fennig (eds.). 2013. Ethnologue: Languages of the world, pages 233-62.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Syntactic annotations for the google books ngram corpus",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jean-Baptiste",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Erez",
"middle": [],
"last": "Lieberman Aiden",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Orwant",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Brockman",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the ACL 2012 System Demonstrations",
"volume": "",
"issue": "",
"pages": "169--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuri Lin, Jean-Baptiste Michel, Erez Lieberman Aiden, Jon Orwant, Will Brockman, and Slav Petrov. 2012. Syntactic annotations for the google books ngram corpus. In Proceedings of the ACL 2012 System Demonstrations, pages 169-174. ACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Evaluating resourcelean cross-lingual embedding models in unsupervised retrieval",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Litschko",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vulic",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Dietz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1109--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Litschko, Goran Glava\u0161, Ivan Vulic, and Laura Dietz. 2019. Evaluating resource- lean cross-lingual embedding models in unsu- pervised retrieval. In Proceedings of the 42nd International ACM SIGIR Conference on Re- search and Development in Information Retrieval, pages 1109-1112. ACM.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Bilingual word representations with monolingual quality in mind",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Pro- ceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting similarities among lan- guages for machine translation. arXiv preprint arXiv:1309.4168.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.04902"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2017. A survey of cross-lingual word embedding models. arXiv preprint arXiv:1706.04902.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks",
"authors": [
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Saxe",
"suffix": ""
},
{
"first": "James",
"middle": [
"L"
],
"last": "Mcclelland",
"suffix": ""
},
{
"first": "Surya",
"middle": [],
"last": "Ganguli",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6120"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew M. Saxe, James L. McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Ridge regression, hubness, and zero-shot learning",
"authors": [
{
"first": "Yutaro",
"middle": [],
"last": "Shigeto",
"suffix": ""
},
{
"first": "Ikumi",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Kazuo",
"middle": [],
"last": "Hara",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Shimbo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2015,
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "135--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, and Yuji Matsumoto. 2015. Ridge regression, hubness, and zero-shot learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 135-151. Springer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [
{
"first": "L",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "H",
"middle": [
"P"
],
"last": "David",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Turban",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hamblin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.03859"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Inverted indexing for cross-lingual NLP",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "H\u00e9ctor Mart\u00ednez Alonso",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johannsen",
"suffix": ""
}
],
"year": 2015,
"venue": "The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference of the Asian Federation of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard,\u017deljko Agi\u0107, H\u00e9ctor Mart\u00ednez Alonso, Barbara Plank, Bernd Bohnet, and Anders Johannsen. 2015. Inverted indexing for cross-lingual NLP. In The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2015).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "On the limitations of unsupervised bilingual dictionary induction",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "778--788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778-788.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Do we really need fully unsupervised cross-lingual embeddings?",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4398--4409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Goran Glava\u0161, Roi Reichart, and Anna Korhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4398-4409.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Bilingual distributed word representations from document-aligned comparable data",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "953--994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2016. Bilingual distributed word representations from document-aligned comparable data. Journal of Artificial Intelligence Research, 55:953-994.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Unsupervised multilingual word embedding with limited resources using neural language models",
"authors": [
{
"first": "Takashi",
"middle": [],
"last": "Wada",
"suffix": ""
},
{
"first": "Tomoharu",
"middle": [],
"last": "Iwata",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3113--3124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takashi Wada, Tomoharu Iwata, and Yuji Matsumoto. 2019. Unsupervised multilingual word embedding with limited resources using neural language models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3113-3124.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Normalized word embedding and orthogonal transform for bilingual word translation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiye",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1006--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Bilingual word embeddings for phrase-based machine translation",
"authors": [
{
"first": "Will",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1393--1398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393-1398.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Sample language tree representation simplified for illustration purposes (Lewis and Gary, 2015)."
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "otherwise. Example. Consider the set of embeddings for English X_en, the transformation W_{s_ge* \u2190 s_en} that converts embeddings in the English space to the Germanic language family space, and the English embeddings transformed to the Germanic space, W_{s_ge* \u2190 s_en} X_en. HCEG makes it so that W_{s_ge* \u2190 s_en} X_en and W_{s_ge* \u2190 s_de} X_de (the transformed embeddings of English and German) are in the same Germanic embedding space, while W_{s_in* \u2190 s_en} X_en and W_{s_in* \u2190 s_es} X_es (the transformed embeddings of English and Spanish) are in the same Indo-European embedding space."
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Results on the MUSE dataset. Scores from"
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">Pivot Language Family</td><td>Afro-Asiatic</td><td>Austronesian</td><td>Indo-European/Balto-Slavic</td><td>Indo-European/Germanic</td><td>Indo-European/Indo-Iranian</td><td>Indo-European/Italic</td><td>Sino-Tibetan</td><td>Turkic</td><td>Uralic</td><td>Avg.</td></tr><tr><td>en</td><td>Indo-European/Germanic</td><td colspan=\"10\">27.3 28.7 32.1 39.8 31.4 40.4 27.3 26.9 28.3 31.4</td></tr><tr><td>arz</td><td>Afro-Asiatic</td><td colspan=\"10\">30.2 27.1 28.1 32.1 28.3 33.4 25.1 23.4 27.1 28.3</td></tr><tr><td>id</td><td>Austronesian</td><td colspan=\"10\">27.1 30.3 27.7 31.1 28.3 32.5 25.8 24.6 27.6 28.3</td></tr><tr><td>ru</td><td colspan=\"11\">Indo-European/Balto-Slavic 26.3 26.3 34.2 38.2 28.5 37.3 24.6 22.5 26.8 29.4</td></tr><tr><td>de</td><td>Indo-European/Germanic</td><td colspan=\"10\">25.1 26.9 25.1 37.6 27.3 37.2 24.7 23.7 25.6 28.1</td></tr><tr><td>hi</td><td colspan=\"11\">Indo-European/Indo-Iranian 26.3 27.1 26.1 33.7 32.3 34.2 23.4 25.6 26.4 28.3</td></tr><tr><td>es</td><td>Indo-European/Italic</td><td colspan=\"10\">26.9 26.7 30.6 38.5 31.0 41.5 26.8 26.7 28.4 30.8</td></tr><tr><td>pt</td><td>Indo-European/Italic</td><td colspan=\"10\">26.0 26.6 30.4 37.9 27.7 41.3 25.9 26.4 26.5 29.9</td></tr><tr><td>zh</td><td>Sino-Tibetan</td><td colspan=\"10\">25.1 27.3 25.3 23.4 26.1 24.8 29.3 25.7 27.6 26.1</td></tr><tr><td>tr</td><td>Turkic</td><td colspan=\"10\">24.9 25.3 25.5 28.2 27.8 28.6 25.3 28.7 27.3 26.8</td></tr><tr><td>hu</td><td>Uralic</td><td colspan=\"10\">25.4 25.8 25.8 31.8 26.4 32.8 25.5 21.9 30.1 27.3</td></tr></table>",
"text": "Best (B), worst (W), and English mapping ranking (E) for each language (L)."
}
}
}
}