{
"paper_id": "D18-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:48:20.812023Z"
},
"title": "Unsupervised Multilingual Word Embeddings",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell Unversity Ithaca",
"location": {
"postCode": "14853",
"region": "NY",
"country": "USA"
}
},
"email": "xlchen@cs.cornell.edu"
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell Unversity Ithaca",
"location": {
"postCode": "14853",
"region": "NY",
"country": "USA"
}
},
"email": "cardie@cs.cornell.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multilingual Word Embeddings (MWEs) represent words from multiple languages in a single distributional vector space. Unsupervised MWE (UMWE) methods acquire multilingual embeddings without cross-lingual supervision, which is a significant advantage over traditional supervised approaches and opens many new possibilities for low-resource languages. Prior art for learning UMWEs, however, merely relies on a number of independently trained Unsupervised Bilingual Word Embeddings (UBWEs) to obtain multilingual embeddings. These methods fail to leverage the interdependencies that exist among many languages. To address this shortcoming, we propose a fully unsupervised framework for learning MWEs 1 that directly exploits the relations between all language pairs. Our model substantially outperforms previous approaches in the experiments on multilingual word translation and cross-lingual word similarity. In addition, our model even beats supervised approaches trained with cross-lingual resources.",
"pdf_parse": {
"paper_id": "D18-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "Multilingual Word Embeddings (MWEs) represent words from multiple languages in a single distributional vector space. Unsupervised MWE (UMWE) methods acquire multilingual embeddings without cross-lingual supervision, which is a significant advantage over traditional supervised approaches and opens many new possibilities for low-resource languages. Prior art for learning UMWEs, however, merely relies on a number of independently trained Unsupervised Bilingual Word Embeddings (UBWEs) to obtain multilingual embeddings. These methods fail to leverage the interdependencies that exist among many languages. To address this shortcoming, we propose a fully unsupervised framework for learning MWEs 1 that directly exploits the relations between all language pairs. Our model substantially outperforms previous approaches in the experiments on multilingual word translation and cross-lingual word similarity. In addition, our model even beats supervised approaches trained with cross-lingual resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Continuous distributional word representations (Turian et al., 2010) have become a common technique across a wide variety of NLP tasks. Recent research, moreover, proposes cross-lingual word representations (Klementiev et al., 2012; Mikolov et al., 2013a) that create a shared embedding space for words across two (Bilingual Word Embeddings, BWE) or more languages (Multilingual Word Embeddings, MWE) . Words from different languages with similar meanings will be close to one another in this cross-lingual embedding space. These embeddings have been found beneficial for a number of cross-lingual and even monolingual NLP tasks (Faruqui and Dyer, 2014; Ammar et al., 2016) .",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF26"
},
{
"start": 207,
"end": 232,
"text": "(Klementiev et al., 2012;",
"ref_id": "BIBREF16"
},
{
"start": 233,
"end": 255,
"text": "Mikolov et al., 2013a)",
"ref_id": "BIBREF20"
},
{
"start": 365,
"end": 400,
"text": "(Multilingual Word Embeddings, MWE)",
"ref_id": null
},
{
"start": 629,
"end": 653,
"text": "(Faruqui and Dyer, 2014;",
"ref_id": "BIBREF13"
},
{
"start": 654,
"end": 673,
"text": "Ammar et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most common form of cross-lingual word representations is the BWE, which connects the lexical semantics of two languages. Traditionally for training BWEs, cross-lingual supervision is required, either in the form of parallel corpora (Klementiev et al., 2012; Zou et al., 2013) , or in the form of bilingual lexica (Mikolov et al., 2013a; Xing et al., 2015) . This makes learning BWEs for low-resource language pairs much more difficult. Fortunately, there are attempts to reduce the dependence on bilingual supervision by requiring a very small parallel lexicon such as identical character strings (Smith et al., 2017) , or numerals (Artetxe et al., 2017) . Furthermore, recent work proposes approaches to obtain unsupervised BWEs without relying on any bilingual resources (Zhang et al., 2017; Lample et al., 2018b) .",
"cite_spans": [
{
"start": 237,
"end": 262,
"text": "(Klementiev et al., 2012;",
"ref_id": "BIBREF16"
},
{
"start": 263,
"end": 280,
"text": "Zou et al., 2013)",
"ref_id": "BIBREF30"
},
{
"start": 318,
"end": 341,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF20"
},
{
"start": 342,
"end": 360,
"text": "Xing et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 602,
"end": 622,
"text": "(Smith et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 637,
"end": 659,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 778,
"end": 798,
"text": "(Zhang et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 799,
"end": 820,
"text": "Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast to BWEs that only focus on a pair of languages, MWEs instead strive to leverage the interdependencies among multiple languages to learn a multilingual embedding space. MWEs are desirable when dealing with multiple languages simultaneously and have also been shown to improve the performance on some bilingual tasks thanks to its ability to acquire knowledge from other languages (Ammar et al., 2016; Duong et al., 2017) . Similar to training BWEs, cross-lingual supervision is typically needed for training MWEs, and the prior art for obtaining fully unsupervised MWEs simply maps all the languages independently to the embedding space of a chosen target language 2 (usually English) (Lample et al., 2018b) . There are downsides, however, when using a single fixed target language with no interaction between any of the two source languages. For instance, French and Italian are very similar, and the fact that each of them is individually converted to a less similar language, English for example, in order to produce a shared embedding space will inevitably degrade the quality of the MWEs.",
"cite_spans": [
{
"start": 391,
"end": 411,
"text": "(Ammar et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 412,
"end": 431,
"text": "Duong et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 696,
"end": 718,
"text": "(Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For certain multilingual tasks such as translating between any pair of N given languages, another option for obtaining UMWEs exists. One can directly train UBWEs for each of such language pairs (referred to as BWE-Direct). This is seldom used in practice, since it requires training O(N 2 ) BWE models as opposed to only O(N ) in BWE-Pivot, and is too expensive for most use cases. Moreover, this method still does not fully exploit the language interdependence. For example, when learning embeddings between French and Italian, BWE-Direct only utilizes information from the pair itself, but other Romance languages such as Spanish may also provide valuable information that could improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose a novel unsupervised algorithm to train MWEs using only monolingual corpora (or equivalently, monolingual word embeddings). Our method exploits the interdependencies between any two languages and maps all monolingual embeddings into a shared multilingual embedding space via a two-stage algorithm consisting of (i) Multilingual Adversarial Training (MAT) and (ii) Multilingual Pseudo-Supervised Refinement (MPSR). As shown by experimental results on multilingual word translation and crosslingual word similarity, our model is as efficient as BWE-Pivot yet outperforms both BWE-Pivot and BWE-Direct despite the latter being much more expensive. In addition, our model achieves a higher overall performance than state-of-the-art supervised methods in these experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is a plethora of literature on learning crosslingual word representations, focusing either on a pair of languages, or multiple languages at the same time (Klementiev et al., 2012; Zou et al., 2013; Mikolov et al., 2013a; Gouws et al., 2015; Coulmance et al., 2015; Ammar et al., 2016; Duong et al., 2017, inter alia) . One shortcoming of these methods is the dependence on crosslingual supervision such as parallel corpora or bilingual lexica. Abundant research efforts have been made to alleviate such dependence (Vuli\u0107 and Moens, 2015; Artetxe et al., 2017; Smith et al., 2017) , but consider only the case of a single pair of languages (BWEs). Furthermore, fully unsupervised methods exist for learning BWEs (Zhang et al., 2017; Lample et al., 2018b; Artetxe et al., 2018a) . For unsupervised MWEs, however, previous methods merely rely on a number of independent BWEs to separately map each language into the embedding space of a chosen target language (Smith et al., 2017; Lample et al., 2018b) .",
"cite_spans": [
{
"start": 160,
"end": 185,
"text": "(Klementiev et al., 2012;",
"ref_id": "BIBREF16"
},
{
"start": 186,
"end": 203,
"text": "Zou et al., 2013;",
"ref_id": "BIBREF30"
},
{
"start": 204,
"end": 226,
"text": "Mikolov et al., 2013a;",
"ref_id": "BIBREF20"
},
{
"start": 227,
"end": 246,
"text": "Gouws et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 247,
"end": 270,
"text": "Coulmance et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 271,
"end": 290,
"text": "Ammar et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 291,
"end": 322,
"text": "Duong et al., 2017, inter alia)",
"ref_id": null
},
{
"start": 520,
"end": 543,
"text": "(Vuli\u0107 and Moens, 2015;",
"ref_id": "BIBREF27"
},
{
"start": 544,
"end": 565,
"text": "Artetxe et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 566,
"end": 585,
"text": "Smith et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 717,
"end": 737,
"text": "(Zhang et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 738,
"end": 759,
"text": "Lample et al., 2018b;",
"ref_id": "BIBREF18"
},
{
"start": 760,
"end": 782,
"text": "Artetxe et al., 2018a)",
"ref_id": "BIBREF3"
},
{
"start": 963,
"end": 983,
"text": "(Smith et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 984,
"end": 1005,
"text": "Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Adversarial Neural Networks have been successfully applied to various cross-lingual NLP tasks where annotated data is not available, such as cross-lingual text classification (Chen et al., 2016) , unsupervised BWE induction (Zhang et al., 2017; Lample et al., 2018b) and unsupervised machine translation (Lample et al., 2018a; Artetxe et al., 2018b) . These works, however, only consider the case of two languages, and our MAT method ( \u00a73.1) is a generalization to multiple languages. Mikolov et al. (2013a) first propose to learn cross-lingual word representations by learning a linear mapping between the monolingual embedding spaces of a pair of languages. It has then been observed that enforcing the linear mapping to be orthogonal could significantly improve performance (Xing et al., 2015; Artetxe et al., 2016; Smith et al., 2017) . These methods solve a linear equation called the orthogonal Procrustes problem for the optimal orthogonal linear mapping between two languages, given a set of word pairs as supervision. Artetxe et al. (2017) find that when using weak supervision (e.g. digits in both languages), applying this Procrustes process iteratively achieves higher performance. Lample et al. (2018b) adopt the iterative Procrustes method with pseudo-supervision in a fully unsupervised setting and also obtain good results. In the MWE task, however, the multilingual mappings no longer have a closed-form solution, and we hence propose the MPSR algorithm ( \u00a73.2) for learning multilingual embeddings using gradient-based optimization methods.",
"cite_spans": [
{
"start": 175,
"end": 194,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 224,
"end": 244,
"text": "(Zhang et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 245,
"end": 266,
"text": "Lample et al., 2018b)",
"ref_id": "BIBREF18"
},
{
"start": 304,
"end": 326,
"text": "(Lample et al., 2018a;",
"ref_id": "BIBREF17"
},
{
"start": 327,
"end": 349,
"text": "Artetxe et al., 2018b)",
"ref_id": "BIBREF4"
},
{
"start": 485,
"end": 507,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF20"
},
{
"start": 777,
"end": 796,
"text": "(Xing et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 797,
"end": 818,
"text": "Artetxe et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 819,
"end": 838,
"text": "Smith et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 1027,
"end": 1048,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF2"
},
{
"start": 1194,
"end": 1215,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this work, our goal is to learn a single multilingual embedding space for N languages, without relying on any cross-lingual supervision. We assume that we have access to monolingual embeddings for each of the N languages, which can be obtained using unlabeled monolingual corpora (Mikolov et al., 2013b; . We now present our unsupervised MWE (UMWE) model that jointly maps the monolingual embeddings of all N languages into a single space by explicitly leveraging the interdependencies between arbitrary language pairs, but is computationally as efficient as learning O(N ) BWEs (instead of O(N 2 )).",
"cite_spans": [
{
"start": 283,
"end": 306,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Denote the set of languages as L with |L | = N . Suppose for each language l \u2208 L with vocabulary V l , we have a set of d-dimensional monolingual word embeddings E l of size |V l | \u00d7 d. Let S l denote the monolingual embedding space for l, namely the distribution of the monolingual embeddings of l. If a set of embeddings E are in an embedding space S, we write E S (e.g. \u2200l : E l S l ). Our models learns a set of encoders M l , one for each language l, and the corresponding decoders M \u22121 l . The encoders map all E l to a single target space T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "M l (E l ) T . On the other hand, a decoder M \u22121 l maps an embedding in T back to S l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Previous research (Mikolov et al., 2013a) shows that there is a strong linear correlation between the vector spaces of two languages, and that learning a complex non-linear neural mapping does not yield better results. Xing et al. (2015) further show that enforcing the linear mappings to be orthogonal matrices achieves higher performance. Therefore, we let our encoders M l be orthogonal linear matrices, and the corresponding decoders can be obtained by simply taking the transpose:",
"cite_spans": [
{
"start": 18,
"end": 41,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF20"
},
{
"start": 219,
"end": 237,
"text": "Xing et al. (2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "M \u22121 l = M l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Thus, applying the encoder or decoder to an embedding vector is accomplished by multiplying the vector with the encoder/decoder matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
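The transpose-as-decoder property is easy to verify numerically. Below is a minimal numpy sketch, illustrative only and not the authors' code; all variable names are hypothetical:

```python
import numpy as np

# Illustrative sketch (not the authors' code): with an orthogonal encoder
# M_l, the decoder is simply its transpose, so encode-then-decode is exact.
rng = np.random.default_rng(0)
d = 50

# A random orthogonal matrix via QR decomposition plays the role of M_l.
M_l, _ = np.linalg.qr(rng.standard_normal((d, d)))

x = rng.standard_normal(d)   # a monolingual word embedding in S_l
t = M_l @ x                  # encode into the shared target space T
x_rec = M_l.T @ t            # decode back into S_l with the transpose

assert np.allclose(x, x_rec)                 # lossless roundtrip
assert np.allclose(M_l.T @ M_l, np.eye(d))   # M_l is orthogonal
```

Because M_l is orthogonal, multiplying by M_l.T inverts the encoder exactly, so no separate decoder parameters are needed.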
{
"text": "Another benefit of using linear encoders and decoders (also referred to as mappings) is that we can learn N \u2212 1 mappings instead of N by choosing the target space T to be the embedding space of a specific language (denoted as the target language) without losing any expressiveness of the model. Given a MWE with an arbitrary T , we can construct an equivalent one with only N \u22121 mappings by multiplying the encoders of each language M l to the decoder of the chosen target language M t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "M t = M t M t = I M l E l = (M t M l )E l S t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "where I is the identity matrix. The new MWE is isomorphic to the original one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
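The construction above, folding the target language's decoder into every encoder, can likewise be sketched in a few lines of numpy. This is an illustrative check under assumed names, not the paper's implementation:

```python
import numpy as np

# Illustrative sketch (hypothetical names): given encoders M[l] and a chosen
# target language t, M_new[l] = M[t].T @ M[l] yields an equivalent MWE in
# which the target's own mapping is the identity, leaving N - 1 mappings.
rng = np.random.default_rng(1)
d = 20
langs = ["en", "fr", "it"]
M = {l: np.linalg.qr(rng.standard_normal((d, d)))[0] for l in langs}

t = "en"
M_new = {l: M[t].T @ M[l] for l in langs}

assert np.allclose(M_new[t], np.eye(d))   # target mapping becomes I
# Cross-lingual geometry is unchanged: the fr -> it conversion is identical.
assert np.allclose(M_new["it"].T @ M_new["fr"], M["it"].T @ M["fr"])
```

The second assertion is the isomorphism claim: because M[t] is orthogonal, every pairwise product M_j\u22a4 M_i, and hence every cross-lingual conversion, is preserved.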
{
"text": "We now present the two major components of our approach, Multilingual Adversarial Training ( \u00a73.1) and Multilingual Pseudo-Supervised Refinement ( \u00a73.2). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "D j M i M \u22a4 j J D j J M i lang i lang i lang j lang j lang j Figure 1: Multilingual Adversarial Training (Algo- rithm 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": ". lang i and lang j are two randomly selected languages at each training step. J Dj and J Mi are the objectives of D j and M i , respectively (Eqn. 1 and 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "In this section, we introduce an adversarial training approach for learning multilingual embeddings without cross-lingual supervision. Adversarial Training is a powerful technique for minimizing the divergence between complex distributions that are otherwise difficult to directly model (Goodfellow et al., 2014) . In the crosslingual setting, it has been successfully applied to unsupervised cross-lingual text classification (Chen et al., 2016) and unsupervised bilingual word embedding learning (Zhang et al., 2017; Lample et al., 2018b) . However, these methods only consider one pair of languages at a time, and do not fully exploit the cross-lingual relations in the multilingual setting. Figure 1 shows our Multilingual Adversarial Training (MAT) model and the training procedure is described in Algorithm 1. Note that as explained in \u00a73, the encoders and decoders adopted in practice are orthogonal linear mappings while the shared embedding space is chosen to be the same space as a selected target language.",
"cite_spans": [
{
"start": 287,
"end": 312,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 427,
"end": 446,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 498,
"end": 518,
"text": "(Zhang et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 519,
"end": 540,
"text": "Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 695,
"end": 703,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multilingual Adversarial Training",
"sec_num": "3.1"
},
{
"text": "Require: Vocabulary Vi for each language lang i \u2208 L . Hyperparameter k \u2208 N. 1: repeat 2: D iterations 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "for diter = 1 to k do 4: In order to learn a multilingual embedding space without supervision, we employ a series of language discriminators D l , one for each language l \u2208 L . Each D l is a binary classifier with a sigmoid layer on top, and is trained to identify how likely a given vector is from S l , the embedding space of language l. On the other hand, to train the mappings, we convert a vector from a random language lang i to another random language lang j (via the target space T first). The objective of the mappings is to confuse D j , the language discriminator for lang j , so the mappings are updated in a way that D j cannot differentiate the converted vectors from the real vectors in S j . This multilingual objective enables us to explicitly exploit the relations between all language pairs during training, leading to improved performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "loss d = 0 5: for all lang j \u2208 L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "Formally, for any language lang j , the objective that D j is minimizing is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "J D j = E i\u223cL E x i \u223cS i x j \u223cS j L d (1, D j (x j )) + L d 0, D j (M j M i x i ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "where L d (y,\u0177) is the loss function of D, which is chosen as the cross entropy loss in practice. y is the language label with y = 1 indicates a real embedding from that language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "Furthermore, the objective of M i for lang i is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "J M i = E j\u223cL E x i \u223cS i x j \u223cS j L d 1, D j (M j M i x i ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "where M i strives to make D j believe that a converted vector to lang j is instead real. This adversarial relation between M and D stimulates M to learn a shared multilingual embedding space by making the converted vectors look as authentic as possible so that D cannot predict whether a vector is a genuine embedding from a certain language or converted from another language via M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
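To make Eqns. (1) and (2) concrete, here is a hypothetical numpy sketch of the two per-sample losses for one sampled language pair. The logistic discriminator, dimensions, and variable names are illustrative assumptions, and no training loop is shown:

```python
import numpy as np

# Hypothetical sketch of the per-sample losses in Eqns. (1) and (2): a vector
# x_i is encoded by M_i, decoded into lang_j's space by M_j^T, and scored by a
# small logistic discriminator D_j. Everything here is illustrative.
rng = np.random.default_rng(2)
d = 16

def bce(y, p):
    # binary cross-entropy L_d(y, p)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

M_i = np.linalg.qr(rng.standard_normal((d, d)))[0]  # encoder for lang_i
M_j = np.linalg.qr(rng.standard_normal((d, d)))[0]  # encoder for lang_j
w_j = rng.standard_normal(d)                        # D_j's weight vector

def D_j(v):
    return 1.0 / (1.0 + np.exp(-w_j @ v))  # P(v is a real lang_j embedding)

x_i = rng.standard_normal(d)       # sampled from S_i
x_j = rng.standard_normal(d)       # sampled from S_j
x_fake = M_j.T @ (M_i @ x_i)       # x_i converted into lang_j's space

J_D = bce(1, D_j(x_j)) + bce(0, D_j(x_fake))  # Eqn. (1): train D_j
J_M = bce(1, D_j(x_fake))                     # Eqn. (2): M_i tries to fool D_j
assert J_D > 0 and J_M > 0
```

In training, J_D would be minimized with respect to D_j's parameters only, and J_M with respect to the mappings only, alternating as in Algorithm 1.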
{
"text": "In addition, we allow lang i and lang j to be the same language in (1) and (2). In this case, we are encoding a language to T and back to itself, essentially forming an adversarial autoencoder (Makhzani et al., 2015) , which is reported to improve the model performance (Zhang et al., 2017) . Finally, on Line 5 and 17 in Algorithm 1, a for loop is used instead of random sampling. This is to ensure that in each step, every discriminator (or mapping) is getting updated at least once, so that we do not need to increase the number of training iterations when adding more languages. Computationally, when compared to the BWE-Pivot and BWE-Direct baselines, one step of MAT training costs similarly to N BWE training steps, and in practice we train MAT for the same number of iterations as training the baselines. Therefore, MAT training scales linearly with the number of languages similar to BWE-Pivot (instead of quadratically as in BWE-Direct).",
"cite_spans": [
{
"start": 193,
"end": 216,
"text": "(Makhzani et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 270,
"end": 290,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Multilingual Adversarial Training",
"sec_num": null
},
{
"text": "Using MAT, we are able to obtain UMWEs with reasonable quality, but they do not yet achieve state-of-the-art performance. Previous research on learning unsupervised BWEs (Lample et al., 2018b) observes that the embeddings obtained from adversarial training do a good job aligning the frequent words between two languages, but performance degrades when considering the full vocabulary. They hence propose to use an iterative refinement method (Artetxe et al., 2017) to repeatedly refine the embeddings obtained from the adversarial training. The idea is that we can anchor on the more accurately predicted relations between frequent words to improve the mappings learned by adversarial training.",
"cite_spans": [
{
"start": 170,
"end": 192,
"text": "(Lample et al., 2018b)",
"ref_id": "BIBREF18"
},
{
"start": 442,
"end": 464,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "Algorithm 2 Multilingual Pseudo-Supervised Refinement",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "Require: A set of (pseudo-)supervised lexica of word pairs between each pair of languages Lex(lang i , lang j ). 1: repeat 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "loss = 0 3: for all lang i \u2208 L do 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "Select at random lang j \u2208 L 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "Sample (xi, xj) \u223c Lex(lang i , lang j ) 6: ti = Mi(xi) encode xi 7: tj = Mj(xj) encode xj 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "loss += Lr(ti, tj) refinement loss 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "Update all M parameters to minimize loss 10:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "orthogonalize(M) see \u00a73.3 11: until convergence When learning MWEs, however, it is desirable to go beyond aligning each language with the target space individually, and instead utilize the relations between all languages as we did in MAT. Therefore, we in this section propose a generalization of the existing refinement methods to incorporate a multilingual objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "In particular, MAT can produce an approximately aligned embedding space. As mentioned earlier, however, the training signals from D for rare words are noisier and may lead to worse performance. Thus, the idea of Multilingual Pseudo-Supervised Refinement (MPSR) is to induce a dictionary of highly confident word pairs for every language pair, used as pseudo supervision to improve the embeddings learned by MAT. For a specific language pair (lang i , lang j ), the pseudo-supervised lexicon Lex(lang i , lang j ) is constructed from mutual nearest neighbors between M i E i and M j E j , among the most frequent 15k words of both languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
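The mutual-nearest-neighbor lexicon construction can be sketched as follows. This is an illustrative numpy mock-up with hypothetical names; it uses plain cosine similarity for brevity, whereas the paper substitutes CSLS wherever a metric is needed:

```python
import numpy as np

# Illustrative mock-up of MPSR lexicon construction: keep word pairs that are
# mutual nearest neighbors between the mapped embeddings M_i E_i and M_j E_j.
# Plain cosine similarity is used here for brevity; the paper uses CSLS.
rng = np.random.default_rng(3)

def mutual_nn_lexicon(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    sim = A @ B.T                      # pairwise cosine similarities
    a2b = sim.argmax(axis=1)           # best B-word for each A-word
    b2a = sim.argmax(axis=0)           # best A-word for each B-word
    return [(i, int(a2b[i])) for i in range(len(A)) if b2a[a2b[i]] == i]

E_i = rng.standard_normal((100, 32))               # stands in for M_i E_i
E_j = E_i + 0.01 * rng.standard_normal((100, 32))  # a nearly aligned copy
lex = mutual_nn_lexicon(E_i, E_j)

assert len(lex) == 100 and all(i == j for i, j in lex)
```

With nearly aligned inputs, every word recovers its true counterpart; on real adversarially aligned spaces, only the confidently matched pairs survive the mutual-NN filter.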
{
"text": "With the constructed lexica, the MPSR objective is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "J r = E (i,j)\u223cL 2 E (x i ,x j )\u223cLex(i,j) L r (M i x i , M j x j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "(3) where L r (x,x) is the loss function for MPSR, for which we use the mean square loss. The MPSR training is depicted in Algorithm 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Pseudo-Supervised Refinement",
"sec_num": "3.2"
},
{
"text": "When constructing the pseudo-supervised lexica, a distance metric between embeddings is needed to compute nearest neighbors. Standard distance metrics such as the Euclidean distance or cosine similarity, however, can lead to the hubness problem in high-dimensional spaces when used to calculate nearest neighbors (Radovanovi\u0107 et al., 2010; Dinu and Baroni, 2015) . Namely, some words are very likely to be the nearest neighbors of many others (hubs), while others are not the nearest neighbor of any word. This problem is addressed in the literature by designing alternative distance metrics, such as the inverted softmax (Smith et al., 2017) or the CSLS (Lample et al., 2018b) . In this work, we adopt the CSLS similarity as a drop-in replacement for cosine similarity whenever a distance metric is needed. The CSLS similarity (whose negation is a distance metric) is calculated as follows:",
"cite_spans": [
{
"start": 313,
"end": 339,
"text": "(Radovanovi\u0107 et al., 2010;",
"ref_id": "BIBREF22"
},
{
"start": 340,
"end": 362,
"text": "Dinu and Baroni, 2015)",
"ref_id": "BIBREF11"
},
{
"start": 622,
"end": 642,
"text": "(Smith et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 655,
"end": 677,
"text": "(Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Similarity Scaling (CSLS)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CSLS(x, y) = 2 cos(x, y) \u2212 1 n y \u2208N Y (x) cos(x, y ) \u2212 1 n x \u2208N X (y) cos(x , y)",
"eq_num": "(4)"
}
],
"section": "Cross-Lingual Similarity Scaling (CSLS)",
"sec_num": null
},
{
"text": "where N Y (x) is the set of n nearest neighbors of x in the vector space that y comes from: Y = {y 1 , ..., y |Y | }, and vice versa for N X (y). In practice, we use n = 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Similarity Scaling (CSLS)",
"sec_num": null
},
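Eqn. (4) translates directly into vectorized numpy. The following sketch is illustrative only (hypothetical names, random data) and computes CSLS for all pairs at once:

```python
import numpy as np

# Illustrative vectorized sketch of Eqn. (4): CSLS discounts the cosine of a
# pair by each point's average cosine to its n nearest cross-lingual
# neighbors, which penalizes "hub" vectors.
rng = np.random.default_rng(4)

def csls(X, Y, n=10):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sim = Xn @ Yn.T                                  # cos(x, y) for all pairs
    r_Y = np.sort(sim, axis=1)[:, -n:].mean(axis=1)  # mean cos to N_Y(x)
    r_X = np.sort(sim, axis=0)[-n:, :].mean(axis=0)  # mean cos to N_X(y)
    return 2 * sim - r_Y[:, None] - r_X[None, :]

X = rng.standard_normal((50, 16))
Y = rng.standard_normal((50, 16))
Y[0] = X[0]                    # plant one true translation pair

S = csls(X, Y, n=10)
assert S.shape == (50, 50)
assert S[0].argmax() == 0      # the planted pair is still the best match
```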
{
"text": "As mentioned in \u00a73, orthogonal linear mappings are the preferred choice when learning transformations between the embedding spaces of different languages (Xing et al., 2015; Smith et al., 2017) . Therefore, we perform an orthogonalization update (Cisse et al., 2017) after each training step to ensure that our mappings M are (approximately) orthogonal:",
"cite_spans": [
{
"start": 154,
"end": 173,
"text": "(Xing et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 174,
"end": 193,
"text": "Smith et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 246,
"end": 266,
"text": "(Cisse et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonalization",
"sec_num": "3.3"
},
{
"text": "\u2200l : M l = (1 + \u03b2)M l \u2212 \u03b2M l M l M l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonalization",
"sec_num": "3.3"
},
{
"text": "where \u03b2 is set to 0.001.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonalization",
"sec_num": "3.3"
},
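A minimal NumPy sketch of this update (an illustration under our own naming, not the authors' code); repeatedly applying it pulls the singular values of M toward 1, i.e. toward the orthogonal manifold:

```python
import numpy as np

def orthogonalize(M, beta=0.001):
    """One orthogonalization step (Cisse et al., 2017):
    M <- (1 + beta) * M - beta * (M M^T) M.
    For M = U diag(s) V^T this maps each singular value s to
    s * (1 + beta * (1 - s^2)), a contraction toward s = 1."""
    return (1 + beta) * M - beta * (M @ M.T) @ M
```

A larger beta converges in fewer steps but is only stable when M is already close to orthogonal, which is why a small value (0.001) is used after each training step.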
{
"text": "In order to do model selection in the unsupervised setting, where no validation set can be used, a surrogate validation criterion is required that does not depend on bilingual data. Previous work shows promising results using such surrogate criteria for model validation in the bilingual case (Lample et al., 2018b) , and we in this work adopt a variant adapted to our multilingual setting:",
"cite_spans": [
{
"start": 293,
"end": 315,
"text": "(Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Multilingual Validation",
"sec_num": "3.4"
},
{
"text": "V (M, E) = E (i,j)\u223cP ij mean csls(M j M i E i , E j ) = i =j p ij \u2022 mean csls(M j M i E i , E j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Multilingual Validation",
"sec_num": "3.4"
},
{
"text": "where p ij forms a probability simplex. In this work, we let all p ij = 1 N (N \u22121) so that V (M, E) reduces to the macro average over all language pairs. Using different p ij values can place varying weights on different language pairs, which might be desirable in certain scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Multilingual Validation",
"sec_num": "3.4"
},
{
"text": "The mean csls function is an unsupervised bilingual validation criterion proposed by Lample et al. (2018b) , which is the mean CSLS similarities between the most frequent 10k words and their translations (nearest neighbors).",
"cite_spans": [
{
"start": 85,
"end": 106,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Multilingual Validation",
"sec_num": "3.4"
},
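Under the convention that each M_l maps language l into a shared space (so that M_j^T, M being orthogonal, maps back into language j's space), the criterion can be sketched as follows. This is an illustrative reconstruction with our own function names; row-vector embeddings are assumed:

```python
import numpy as np

def csls(X, Y, n=10):
    # CSLS similarity matrix between L2-normalized row embeddings (Eq. 4).
    cos = X @ Y.T
    r_Y = np.sort(cos, axis=1)[:, -n:].mean(axis=1, keepdims=True)
    r_X = np.sort(cos, axis=0)[-n:, :].mean(axis=0, keepdims=True)
    return 2 * cos - r_Y - r_X

def mean_csls(S):
    # Mean CSLS similarity of each source word to its nearest target neighbor.
    return S.max(axis=1).mean()

def validation_criterion(M, E, n=10):
    """Macro-averaged mean_csls over all N(N-1) ordered language pairs.
    Row vectors in E[i] are mapped into language j's space via
    E[i] @ M[i].T @ M[j], the row-vector form of M_j^T M_i E_i."""
    N = len(M)
    scores = [mean_csls(csls(E[i] @ M[i].T @ M[j], E[j], n=n))
              for i in range(N) for j in range(N) if i != j]
    return sum(scores) / (N * (N - 1))
```

As a sanity check, the criterion should score the true mappings higher than arbitrary (e.g. identity) mappings, which is what makes it usable for unsupervised model selection.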
{
"text": "In this section, we present experimental results to demonstrate the effectiveness of our unsupervised MWE method on two benchmark tasks, the multilingual word translation task, and the SemEval-2017 cross-lingual word similarity task. We compare our MAT+MPSR method with state-of-theart unsupervised and supervised approaches, and show that ours outperforms previous methods, supervised or not, on both tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Pre-trained 300d fastText (monolingual) embeddings trained on the Wikipedia corpus are used for all systems that require monolingual word embeddings for learning cross-lingual embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In this section, we consider the task of word translation between arbitrary pairs of a set of N languages. To this end, we use the recently released multilingual word translation dataset on six languages: English, French, German, Italian, Portuguese and Spanish (Lample et al., 2018b) . For any pair of the six languages, a ground-truth bilingual dictionary is provided with a train-test split of 5000 and 1500 unique source words, respectively. The 5k training pairs are used in training supervised baseline methods, while all unsupervised methods do not rely on any cross-lingual resources. All systems are tested on the 1500 test word pairs for each pair of languages.",
"cite_spans": [
{
"start": 262,
"end": 284,
"text": "(Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Word Translation",
"sec_num": "4.1"
},
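The precision@1 metric used below (a source test word scores a hit if its top-ranked candidate under the similarity scores is one of its gold translations) can be sketched as follows; this is an illustrative helper with hypothetical names, not the paper's evaluation script:

```python
import numpy as np

def precision_at_1(S, gold):
    """S: (num_src x num_tgt) similarity matrix between source test words and
    the target vocabulary (e.g. CSLS scores); gold: dict mapping a source row
    index to the set of correct target column indices."""
    hits = sum(S[i].argmax() in targets for i, targets in gold.items())
    return hits / len(gold)

# Toy example: 3 source words, 3 target candidates.
S = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.1, 0.7],
              [0.3, 0.6, 0.1]])
gold = {0: {0}, 1: {2}, 2: {0}}  # word 2's top candidate (column 1) is wrong
```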
{
"text": "For comparison, we adopted a state-of-the-art unsupervised BWE method (Lample et al., 2018b) and generalize it for the multilingual setting using the two aforementioned approaches, namely BWE-Pivot and BWE-Direct, to produce unsupervised baseline MWE systems. English is chosen as the pivot language in BWE-Pivot. We further incorporate the supervised BWE-Direct (Sup-BWE-Direct) method as a baseline, where each BWE is trained on the 5k gold-standard word pairs via the orthogonal Procrustes process (Artetxe et al., 2017; Lample et al., 2018b) . Table 1 presents the evaluation results, wherein the numbers represent precision@1, namely how many times one of the correct translations of a source word is retrieved as the top candidate. All systems retrieve word translations using the CSLS similarity in the learned embedding space. Table 1a shows the detailed results for all 30 language pairs, while Table 1b summarizes the results in a number of ways. We first observe the training cost of all systems summarized in Table 1b. #BWEs indicates the training cost of a certain method measured by how many BWE models it is equivalent to train. BWE-Pivot needs to train 2(N \u22121) BWEs since a separate BWE is trained for each direction in a language pair for increased performance. BWE-Direct on the other hand, trains an individual BWE for all (again, directed) pairs, resulting a total of N (N \u22121) BWEs. The supervised Sup-BWE-Direct method trains the same number of BWEs as BWE-Direct but is much faster in practice, for it does not require the unsupervised adversarial training stage. Finally, while our MAT+MPSR method does not train independent BWEs, as argued in \u00a73.1, the training cost is roughly equivalent to training N \u22121 BWEs, which is corroborated by the real training time shown in Table 1b .",
"cite_spans": [
{
"start": 70,
"end": 92,
"text": "(Lample et al., 2018b)",
"ref_id": "BIBREF18"
},
{
"start": 501,
"end": 523,
"text": "(Artetxe et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 524,
"end": 545,
"text": "Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 548,
"end": 555,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 904,
"end": 912,
"text": "Table 1b",
"ref_id": "TABREF2"
},
{
"start": 1793,
"end": 1801,
"text": "Table 1b",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Multilingual Word Translation",
"sec_num": "4.1"
},
{
"text": "We can see in Table 1a that our MAT+MPSR method achieves the highest performance on all but 3 language pairs, compared against both the unsupervised and supervised approaches. When looking at the overall performance across all language pairs, BWE-Direct achieves a +0.6% performance gain over BWE-Pivot at the cost of being much slower to train. When supervision is available, Sup-BWE-Direct further improves another 0.4% over BWE-Direct. Our MAT+MPSR method, however, attains an impressive 1.3% improvement against Sup-BWE-Direct, despite the lack of cross-lingual supervision.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Table 1a",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Multilingual Word Translation",
"sec_num": "4.1"
},
{
"text": "To provide a more in-depth examination of the results, we first consider the Romance language pairs, such as fr-es, fr-it, fr-pt, es-it, it-pt and their reverse directions. BWE-Pivot performs notably worse than BWE-Direct on these pairs, which validates our hypothesis that going through a less similar language (English) when translating between en-de en-fr en-es en-it en-pt de-fr de-es de-it de-pt fr-es fr-it fr-pt es-it es-pt it-pt similar languages will result in reduced accuracy. Our MAT+MPSR method, however, overcomes this disadvantage of BWE-Pivot and achieves the best performance on all these pairs through an explicit multilingual learning mechanism without increasing the computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Word Translation",
"sec_num": "4.1"
},
{
"text": "Furthermore, our method also beats the BWE-Direct approach, which supports our second hypothesis that utilizing knowledge from languages beyond the pair itself could improve performance. For instance, there are a few pairs where BWE-Pivot outperforms BWE-Direct, such as de-it, itde and pt-de, even though it goes through a third language (English) in BWE-Pivot. This might suggest that for some less similar language pairs, leveraging a third language as a bridge could in some cases work better than only relying on the language pair itself. German is involved in all these language pairs where BWE-Pivot outperforms than BWE-Direct, which is potentially due to the similarity between German and the pivot language English. We speculate that if choosing a different pivot language, there might be other pairs that could benefit. This observation serves as a possible explanation of the superior performance of our multilingual method over BWE-Direct, since our method utilizes knowledge from all languages during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Word Translation",
"sec_num": "4.1"
},
{
"text": "In this section, we evaluate the quality of our MWEs on the cross-lingual word similarity (CLWS) task, which assesses how well the similarity in the cross-lingual embedding space corresponds to a human-annotated semantic similarity score. The high-quality CLWS dataset from SemEval-2017 (Camacho-Collados et al., 2017) is en-de en-es de-es en-it de-it es-it en-fa de-fa es-fa it-fa Average Table 2 : Results for the SemEval-2017 Cross-Lingual Word Similarity task. Spearman's \u03c1 is reported. Luminoso (Speer and Lowry-Duda, 2017) and NASARI (Camacho-Collados et al., 2016) are the two top-performing systems for SemEval-2017 that reported results on all language pairs. used for evaluation. The dataset contains word pairs from any two of the five languages: English, German, Spanish, Italian, and Farsi (Persian), annotated with semantic similarity scores. In addition to the BWE-Pivot and BWE-Direct baseline methods, we also include the two best-performing systems on SemEval-2017, Luminoso (Speer and Lowry-Duda, 2017) and NASARI (Camacho-Collados et al., 2016) for comparison. Note that these two methods are supervised, and have access to the Europarl 3 (for all languages but Farsi) and the OpenSubtitles2016 4 parallel corpora. Table 2 shows the results, where the performance of each model is measured by the Spearman correlation. When compared to the BWE-Pivot and the BWE-Direct baselines, MAT+MPSR continues to perform the best on all language pairs. The qualitative findings stay the same as in the word translation task, except the margin is less significant. This might be because the CLWS task is much more lenient compared to the word translation task, where in the latter one needs to correctly identify the translation of a word out of hundreds of thousands of words in the vocabulary. In CLWS though, one can still achieve relatively high correlation in spite of minor inaccuracies.",
"cite_spans": [
{
"start": 500,
"end": 528,
"text": "(Speer and Lowry-Duda, 2017)",
"ref_id": "BIBREF25"
},
{
"start": 540,
"end": 571,
"text": "(Camacho-Collados et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 993,
"end": 1021,
"text": "(Speer and Lowry-Duda, 2017)",
"ref_id": "BIBREF25"
},
{
"start": 1033,
"end": 1064,
"text": "(Camacho-Collados et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 390,
"end": 397,
"text": "Table 2",
"ref_id": null
},
{
"start": 1235,
"end": 1242,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-Lingual Word Similarity",
"sec_num": "4.2"
},
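Spearman's \u03c1 is the Pearson correlation of the two score lists' rank vectors; a minimal tie-free sketch (our own illustration — real evaluations should average ranks for ties, e.g. via scipy.stats.spearmanr):

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman correlation for tie-free score lists: rank both sequences,
    then take the Pearson correlation of the ranks."""
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(len(x))  # position of each item in sorted order
        return r
    return np.corrcoef(ranks(np.asarray(a)), ranks(np.asarray(b)))[0, 1]
```

Because only ranks matter, \u03c1 rewards any monotone relationship between model similarities and human judgments, which is why minor score inaccuracies are penalized far less than in top-1 word translation.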
{
"text": "On the other hand, an encouraging result is that when compared to the state-of-the-art supervised results, our MAT+MPSR method outperforms NASARI by a very large margin, and achieves top-notch overall performance similar to the competition winner, Luminoso, without using any bitexts. A closer examination reveals that our unsupervised method lags a few points behind Lumi-3 http://opus.nlpl.eu/Europarl.php 4 http://opus.nlpl.eu/ OpenSubtitles2016.php noso on the European languages wherein the supervised methods have access to the large-scale high-quality Europarl parallel corpora. It is the low-resource language, Farsi, that makes our unsupervised method stand out. All of the unsupervised methods outperform the supervised systems from SemEval-2017 on language pairs involving Farsi, which is not covered by the Europarl bitexts. This suggests the advantage of learning unsupervised embeddings for lower-resourced languages, where the supervision might be noisy or absent. Furthermore, within the unsupervised methods, MAT+MPSR again performs the best, and attains a higher margin over the baseline approaches on the low-resource language pairs, vindicating our claim of better multilingual performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Lingual Word Similarity",
"sec_num": "4.2"
},
{
"text": "In this work, we propose a fully unsupervised model for learning multilingual word embeddings (MWEs). Although methods exist for learning high-quality unsupervised BWEs (Lample et al., 2018b) , little work has been done in the unsupervised multilingual setting. Previous work relies solely on a number of unsupervised BWE models to generate MWEs (e.g. BWE-Pivot and BWE-Direct), which does not fully leverage the interdependencies among all the languages. Therefore, we propose the MAT+MPSR method that explicitly exploits the relations between all language pairs without increasing the computational cost. In our experiments on multilingual word translation and cross-lingual word similarity (SemEval-2017), we show that MAT+MPSR outperforms existing unsupervised and even supervised models, achieving new state-of-the-art performance.",
"cite_spans": [
{
"start": 169,
"end": 191,
"text": "(Lample et al., 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For future work, we plan to investigate how our method can be extended to work with other BWE frameworks, in order to overcome the instability issue of Lample et al. (2018b) . As pointed out by recent work (S\u00f8gaard et al., 2018; Artetxe et al., 2018a) , the method by Lample et al. (2018b) performs much worse on certain languages such as Finnish, etc. More reliable multilingual embeddings might be obtained on these languages if we adapt our multilingual training framework to work with the more robust methods proposed recently.",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF18"
},
{
"start": 206,
"end": 228,
"text": "(S\u00f8gaard et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 229,
"end": 251,
"text": "Artetxe et al., 2018a)",
"ref_id": "BIBREF3"
},
{
"start": 268,
"end": 289,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Code: https://github.com/ccsasuke/umwe",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Henceforth, we refer to this method as BWE-Pivot as the target language serves as a pivot to connect other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Massively multilingual word embeddings",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. CoRR, abs/1602.01925.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2289--2294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2289-2294, Austin, Texas. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "789--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully un- supervised cross-lingual mappings of word embed- dings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 789-798. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural ma- chine translation. In International Conference on Learning Representations.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2017 task 2: Multilingual and cross-lingual semantic word similarity",
"authors": [
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Taher"
],
"last": "Pilehvar",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "15--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. Semeval- 2017 task 2: Multilingual and cross-lingual semantic word similarity. In Proceedings of the 11th Interna- tional Workshop on Semantic Evaluation (SemEval- 2017), pages 15-26, Vancouver, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2016,
"venue": "Artificial Intelligence",
"volume": "240",
"issue": "",
"pages": "36--64",
"other_ids": {
"DOI": [
"10.1016/j.artint.2016.07.005"
]
},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Nasari: Integrating ex- plicit knowledge and corpus statistics for a multilin- gual representation of concepts and entities. Artifi- cial Intelligence, 240:36-64.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adversarial deep averaging networks for cross-lingual sentiment classification",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Athiwaratkun",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2016. Adversarial deep av- eraging networks for cross-lingual sentiment classi- fication. arXiv e-prints 1606.01614v5.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Parseval networks: Improving robustness to adversarial examples",
"authors": [
{
"first": "Moustapha",
"middle": [],
"last": "Cisse",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "854--863",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. 2017. Parse- val networks: Improving robustness to adversarial examples. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 854-863, International Convention Centre, Sydney, Australia. PMLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Transgram, fast cross-lingual word-embeddings",
"authors": [
{
"first": "Jocelyn",
"middle": [],
"last": "Coulmance",
"suffix": ""
},
{
"first": "Jean-Marc",
"middle": [],
"last": "Marty",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Amine",
"middle": [],
"last": "Benhalloum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1109--1113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wenzek, and Amine Benhalloum. 2015. Trans- gram, fast cross-lingual word-embeddings. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 1109- 1113, Lisbon, Portugal. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving zero-shot learning by mitigating the hubness problem",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness prob- lem. In International Conference on Learning Rep- resentations, Workshop Track.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multilingual training of crosslingual word embeddings",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Kanayama",
"suffix": ""
},
{
"first": "Tengfei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "894--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2017. Multilingual training of crosslingual word embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 894-904, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving vector space word representations using multilingual correlation",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "462--471",
"other_ids": {
"DOI": [
"10.3115/v1/E14-1049"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 462-471. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. In Advances in Neural Information Processing Systems 27, pages 2672-2680. Curran Associates, Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bilbowa: Fast bilingual distributed representations without word alignments",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed represen- tations without word alignments. In Proceedings of the 32nd International Conference on Machine Learning.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Inducing crosslingual distributed representations of words",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Binod",
"middle": [],
"last": "Bhattarai",
"suffix": ""
}
],
"year": 2012,
"venue": "The COL-ING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "1459--1474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhat- tarai. 2012. Inducing crosslingual distributed rep- resentations of words. In Proceedings of COLING 2012, pages 1459-1474, Mumbai, India. The COL- ING 2012 Organizing Committee.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic De- noyer, and Marc'Aurelio Ranzato. 2018a. Unsu- pervised machine translation using monolingual cor- pora only. In International Conference on Learning Representations.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv",
"middle": [],
"last": "Jgou",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv Jgou. 2018b. Word translation without parallel data. In Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adversarial autoencoders",
"authors": [
{
"first": "Alireza",
"middle": [],
"last": "Makhzani",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Frey",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.05644"
]
},
"num": null,
"urls": [],
"raw_text": "Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. 2015. Adversarial autoencoders. arXiv preprint arXiv:1511.05644.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, pages 3111-3119, USA. Curran Associates Inc.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hubs in space: Popular nearest neighbors in high-dimensional data",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Radovanovi\u0107",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Nanopoulos",
"suffix": ""
},
{
"first": "Mirjana",
"middle": [],
"last": "Ivanovi\u0107",
"suffix": ""
}
],
"year": 2010,
"venue": "J. Mach. Learn. Res",
"volume": "11",
"issue": "",
"pages": "2487--2531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Radovanovi\u0107, Alexandros Nanopoulos, and Mirjana Ivanovi\u0107. 2010. Hubs in space: Popular nearest neighbors in high-dimensional data. J. Mach. Learn. Res., 11:2487-2531.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [
{
"first": "Samuel",
"middle": [
"L"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [
"H",
"P"
],
"last": "Turban",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Hamblin",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "On the limitations of unsupervised bilingual dictionary induction",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "778--788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778-788. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Conceptnet at semeval-2017 task 2: Extending word embeddings with multilingual relational knowledge",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joanna",
"middle": [],
"last": "Lowry-Duda",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017",
"volume": "",
"issue": "",
"pages": "85--89",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2008"
]
},
"num": null,
"urls": [],
"raw_text": "Robert Speer and Joanna Lowry-Duda. 2017. Conceptnet at semeval-2017 task 2: Extending word embeddings with multilingual relational knowledge. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017, pages 85-89.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Word representations: A simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev-Arie",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "719--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel document-aligned data applied to bilingual lexicon induction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 719-725, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Normalized word embedding and orthogonal transform for bilingual word translation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiye",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1006--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Adversarial training for unsupervised bilingual lexicon induction",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1959--1970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959-1970, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bilingual word embeddings for phrase-based machine translation",
"authors": [
{
"first": "Will",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1393--1398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393-1398, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Forward and backward passes when training M; forward and backward passes when training D",
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td colspan=\"4\">Unsupervised methods without cross-lingual supervision</td></tr><tr><td>BWE-Pivot</td><td colspan=\"3\">74.0 82.3 81.7 77.0 80.7 71.9 66.1 68.0 57.4 81.1 79.7 74.7 81.9 85.0 78.9</td></tr><tr><td>BWE-Direct</td><td colspan=\"3\">74.0 82.3 81.7 77.0 80.7 73.0 65.7 66.5 58.5 83.1 83.0 77.9 83.3 87.3 80.5</td></tr><tr><td>MAT+MPSR</td><td colspan=\"3\">74.8 82.4 82.5 78.8 81.5 76.7 69.6 72.0 63.2 83.9 83.5 79.3 84.5 87.8 82.3</td></tr><tr><td/><td colspan=\"3\">de-en fr-en es-en it-en pt-en fr-de es-de it-de pt-de es-fr it-fr pt-fr it-es pt-es pt-it</td></tr><tr><td colspan=\"4\">Supervised methods with cross-lingual supervision</td></tr><tr><td colspan=\"4\">Sup-BWE-Direct 72.4 82.4 82.9 76.9 80.3 69.5 68.3 67.5 63.7 85.8 87.1 84.3 87.3 91.5 81.1</td></tr><tr><td colspan=\"4\">Unsupervised methods without cross-lingual supervision</td></tr><tr><td>BWE-Pivot</td><td colspan=\"3\">72.2 82.1 83.3 77.7 80.1 68.1 67.9 66.1 63.1 84.7 86.5 82.6 85.8 91.3 79.2</td></tr><tr><td>BWE-Direct</td><td colspan=\"3\">72.2 82.1 83.3 77.7 80.1 69.7 68.8 62.5 60.5 86 87.6 83.9 87.7 92.1 80.6</td></tr><tr><td>MAT+MPSR</td><td colspan=\"3\">72.9 81.8 83.7 77.4 79.9 71.2 69.0 69.5 65.7 86.9 88.1 86.3 88.2 92.7 82.6</td></tr><tr><td/><td/><td/><td>(a) Detailed Results</td></tr><tr><td/><td colspan=\"2\">Training Cost</td><td>Single Source</td><td>Single Target</td></tr><tr><td/><td colspan=\"3\">#BWEs time en-xx de-xx fr-xx es-xx it-xx pt-xx xx-en xx-de xx-fr xx-es xx-it xx-pt Overall</td></tr><tr><td colspan=\"4\">Supervised methods with cross-lingual supervision</td></tr><tr><td colspan=\"4\">Sup-BWE-Direct N (N \u22121) 4h 78.6 68.4 79.2 81.6 80.0 80.2 79.0 68.5 82.3 82.1 78.9 77.1 78.0</td></tr><tr><td colspan=\"4\">Unsupervised methods without cross-lingual supervision</td></tr><tr><td>BWE-Pivot</td><td colspan=\"3\">2(N \u22121) 8h 79.1 67.1 77.1 80.6 79.0 79.3 79.1 67.8 81.6 81.2 77.2 75.3 77.0</td></tr><tr><td>BWE-Direct</td><td 
colspan=\"3\">N (N \u22121) 23h 79.1 67.2 79.2 81.7 79.2 79.4 79.1 67.1 82.6 82.1 78.1 77.0 77.6</td></tr><tr><td>MAT+MPSR</td><td>N \u22121</td><td colspan=\"2\">5h 80.0 70.9 79.9 82.4 81.1 81.4 79.1 70.0 84.1 83.4 80.3 78.8 79.3</td></tr><tr><td/><td/><td/><td>(b) Summarized Results</td></tr></table>",
"text": "Supervised methods with cross-lingual supervisionSup-BWE-Direct 73.5 81.1 81.4 77.3 79.9 73.3 67.7 69.5 59.1 82.6 83.2 78.1 83.5 87.3 81.0",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table/>",
"text": "Multilingual Word Translation Results for English, German, French, Spanish, Italian and Portuguese. The reported numbers are precision@1 in percentage. All systems use the nearest neighbor under the CSLS distance for predicting the translation of a certain word.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table><tr><td>Luminoso</td><td>.769</td><td>.772</td><td>.735</td><td>.787 .747 .767 .595</td><td>.587</td><td>.634 .606</td><td>.700</td></tr><tr><td>NASARI</td><td>.594</td><td>.630</td><td>.548</td><td>.647 .557 .592 .492</td><td>.452</td><td>.466 .475</td><td>.545</td></tr><tr><td colspan=\"5\">Unsupervised methods without cross-lingual supervision</td><td/><td/><td/></tr><tr><td>BWE-Pivot</td><td>.709</td><td>.711</td><td>.703</td><td>.709 .682 .721 .672</td><td>.655</td><td>.701 .688</td><td>.695</td></tr><tr><td>BWE-Direct</td><td>.709</td><td>.711</td><td>.703</td><td>.709 .675 .726 .672</td><td>.662</td><td>.714 .695</td><td>.698</td></tr><tr><td>MAT+MPSR</td><td>.711</td><td>.712</td><td>.708</td><td>.709 .684 .730 .680</td><td>.674</td><td>.720 .709</td><td>.704</td></tr></table>",
"text": "Supervised methods with cross-lingual supervision",
"num": null,
"type_str": "table",
"html": null
}
}
}
}