{
"paper_id": "D18-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:51:18.095109Z"
},
"title": "A Discriminative Latent-Variable Model for Bilingual Lexicon Induction",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": "",
"affiliation": {},
"email": "sebastian@ruder.io"
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": "",
"affiliation": {
"laboratory": "The Computer Laboratory",
"institution": "University of Cambridge",
"location": {
"settlement": "Cambridge",
"country": "UK"
}
},
"email": "ryan.cotterell@jhu.com"
},
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"settlement": "Copenhagen",
"country": "Denmark"
}
},
"email": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"settlement": "Copenhagen",
"country": "Denmark"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce a novel discriminative latentvariable model for the task of bilingual lexicon induction. Our model combines the bipartite matching dictionary prior of Haghighi et al. (2008) with a state-of-the-art embeddingbased approach. To train the model, we derive an efficient Viterbi EM algorithm. We provide empirical improvements on six language pairs under two metrics and show that the prior theoretically and empirically helps to mitigate the hubness problem. We also demonstrate how previous work may be viewed as a similarly fashioned latent-variable model, albeit with a different prior. 1 * The first two authors contributed equally. 1 The code used to run the experiments is available at https://github.com/sebastianruder/ latent-variable-vecmap.",
"pdf_parse": {
"paper_id": "D18-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce a novel discriminative latentvariable model for the task of bilingual lexicon induction. Our model combines the bipartite matching dictionary prior of Haghighi et al. (2008) with a state-of-the-art embeddingbased approach. To train the model, we derive an efficient Viterbi EM algorithm. We provide empirical improvements on six language pairs under two metrics and show that the prior theoretically and empirically helps to mitigate the hubness problem. We also demonstrate how previous work may be viewed as a similarly fashioned latent-variable model, albeit with a different prior. 1 * The first two authors contributed equally. 1 The code used to run the experiments is available at https://github.com/sebastianruder/ latent-variable-vecmap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Is there a more fundamental bilingual linguistic resource than a dictionary? The task of bilingual lexicon induction seeks to create a dictionary in a datadriven manner directly from monolingual corpora in the respective languages and, perhaps, a small seed set of translations. From a practical point of view, bilingual dictionaries have found uses in a myriad of NLP tasks ranging from machine translation (Klementiev et al., 2012) to cross-lingual named entity recognition (Mayhew et al., 2017) . In this work, we offer a probabilistic twist on the task, developing a novel discriminative latent-variable model that outperforms previous work.",
"cite_spans": [
{
"start": 408,
"end": 433,
"text": "(Klementiev et al., 2012)",
"ref_id": "BIBREF19"
},
{
"start": 476,
"end": 497,
"text": "(Mayhew et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposed model is a bridge between current state-of-the-art methods in bilingual lexicon induction that take advantage of word embeddings, e.g., the embeddings induced by Mikolov et al. (2013b) 's skip-gram objective, and older ideas in the literature that build explicit probabilistic models for the task. We propose a discriminative probability model, inspired by Irvine and Callison-Burch (2013) , infused with the bipartite matching dictionary prior of Haghighi et al. (2008) . However, like more recent approaches (Artetxe et al., 2017) , our model operates directly over pretrained word embeddings, induces a joint cross-lingual embedding space, and scales to large vocabulary sizes. To train our model, we derive a generalized expectationmaximization algorithm (EM; Neal and Hinton, 1998) and employ an efficient matching algorithm.",
"cite_spans": [
{
"start": 175,
"end": 197,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF26"
},
{
"start": 370,
"end": 402,
"text": "Irvine and Callison-Burch (2013)",
"ref_id": "BIBREF16"
},
{
"start": 461,
"end": 483,
"text": "Haghighi et al. (2008)",
"ref_id": "BIBREF14"
},
{
"start": 523,
"end": 545,
"text": "(Artetxe et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 777,
"end": 799,
"text": "Neal and Hinton, 1998)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Empirically, we experiment on three standard and three extremely low-resource language pairs. We evaluate intrinsically, comparing the quality of the induced bilingual dictionary, as well as analyzing the resulting bilingual word embeddings themselves. The latent-variable model yields gains over several previous approaches across language pairs. It also enables us to make implicit modeling assumptions explicit. To this end, we provide a reinterpretation of Artetxe et al. (2017) as a latent-variable model with an IBM Model 1-style (Brown et al., 1993) dictionary prior, which allows a clean side-by-side analytical comparison. Viewed in this light, the difference between our approach and Artetxe et al. (2017) , the strongest baseline, is whether one-to-one alignments or one-to-many alignments are admitted between the words of the languages' respective lexicons. Thus, we conclude that our hard constraint on one-to-one alignments is primarily responsible for the improvements over Artetxe et al. (2017) .",
"cite_spans": [
{
"start": 461,
"end": 482,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 536,
"end": 556,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF7"
},
{
"start": 709,
"end": 715,
"text": "(2017)",
"ref_id": "BIBREF0"
},
{
"start": 990,
"end": 1011,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bilingual lexicon induction 2 is the task of finding word-level translations between the lexicons of two languages. For instance, the German word Hund and the English word dog are roughly semantically equivalent, so the pair Hund-dog should be an entry in a German-English bilingual lexicon. The task itself comes in a variety of flavors. We consider a version of the task that only relies on monolingual corpora in the tradition of Rapp (1995) and Fung (1995) . In other words, the goal is to produce a bilingual lexicon primarily from unannotated raw text in each of the respective languages. Importantly, we avoid reliance on bitext, i.e. corpora with parallel sentences that are known translations of each other, e.g., EuroParl (Koehn, 2005) . The bitext assumption is quite common in the literature; see Ruder et al. (2018, Table 2 ) for a survey. Additionally, we will assume the existence of a small seed set of word-level translations obtained from a dictionary; we also experiment with seed sets obtained from heuristics that do not rely on the existence of linguistic resources.",
"cite_spans": [
{
"start": 433,
"end": 444,
"text": "Rapp (1995)",
"ref_id": "BIBREF30"
},
{
"start": 449,
"end": 460,
"text": "Fung (1995)",
"ref_id": "BIBREF11"
},
{
"start": 732,
"end": 745,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF20"
},
{
"start": 809,
"end": 836,
"text": "Ruder et al. (2018, Table 2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Bilingual Lexicon Induction and Word Embeddings",
"sec_num": "2"
},
{
"text": "To ease the later exposition, we will formulate the task graph-theoretically. Let src denote the source language and trg the target language. Suppose the source language src has n src word types in its lexicon V src and trg has n trg word types in its lexicon V trg . We will write v src (i) for the i th word type in src and v trg (i) for the i th word type in trg . We can view the elements of V src and V trg as sets of vertices in a graph. Now consider the bipartite set of vertices",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Theoretic Formulation",
"sec_num": "2.1"
},
{
"text": "V = V trg \u222a V src .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Theoretic Formulation",
"sec_num": "2.1"
},
{
"text": "In these terms, a bilingual lexicon is just a bipartite graph G = (E, V ) and, thus, the task of bilingual lexicon induction is a combinatorial problem: the search for a 'good' edge set E \u2286 V trg \u00d7V src . We depict such a bipartite graph in Figure 1 . In \u00a73, we will operationalize the notion of 'goodness' by assigning a weight w ij to each possible edge between V trg and V src .",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 249,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Graph-Theoretic Formulation",
"sec_num": "2.1"
},
{
"text": "When the edge set E takes the form of a matching, we will denote it as m. 3 In general, we will be interested in partial matchings, where many vertices have no incident edges. We will write M for the set of all partial matchings on the bipartite graph G. The set of vertices in V trg (respectively V src ) with no incident edges will be termed u trg (respectively u src ). Note that for any matching m, we have the identity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Theoretic Formulation",
"sec_num": "2.1"
},
{
"text": "u trg = V trg \\ {i : (i, j) \u2208 m}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Theoretic Formulation",
"sec_num": "2.1"
},
{
"text": "3 A matching is an edge set where none of the edges share common vertices (West, 2000) . German is the target language and English is the source language. The ntrg = 7 German words are shown in blue and the nsrc = 6 English words are shown in green. A bipartite matching m between the two sets of vertices is also depicted. The German nodes in utrg are unmatched.",
"cite_spans": [
{
"start": 74,
"end": 86,
"text": "(West, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Theoretic Formulation",
"sec_num": "2.1"
},
{
"text": "Word embeddings will also play a key role in our model. For the remainder of the paper, we will assume we have access to d-dimensional embeddings for each language's lexicon-for example, those provided by a standard model such as skip-gram (Mikolov et al., 2013b) . Notationally, we define the real matrices S \u2208 R d\u00d7nsrc and T \u2208 R d\u00d7ntrg . Note that in this formulation s i \u2208 R d , the i th column of S, is the word embedding corresponding to v src (i). Likewise, note that t i \u2208 R d , the i th column of T , is the word embedding corresponding to v trg (i).",
"cite_spans": [
{
"start": 240,
"end": 263,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": "2.2"
},
{
"text": "The primary contribution of this paper is a novel latent-variable model for bilingual lexicon induction. The latent variable will be the edge set E, as discussed in \u00a72.1. Given pretrained embeddings for the source and target languages, arranged into the matrices S and T , we define the density",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "p(T | S) := m\u2208M p(T | S, m) \u2022 p(m) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "where, recall from \u00a72, M is the set of all bipartite matchings on the graph G and m \u2208 M is an individual matching. Note that, then, p(m) is a distribution over all bipartite matchings on G such as the matching shown in Figure 1 . We will take p(m) to be fixed as the uniform distribution for the remainder of the exposition, though more complicated distributions could be learned, of course. We further define the distribution",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "p \u03b8 (T | S, m) := (i,j)\u2208m p(t i | s j )\u2022 i\u2208utrg p(t i ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "Recall we write (i, j) \u2208 m to denote an edge in the matching. Furthermore, for notational simplicity, we have dropped the dependence of u trg on m. (Recall u trg = V trg \\ {i : (i, j) \u2208 m}). Next, we define the two densities present in equation 2as Gaussians:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03b8 (t | s) := N (\u2126 s, I) (3) \u221d exp ||t \u2212 \u2126 s|| 2 2 p \u03b8 (t) := N (\u00b5, I)",
"eq_num": "(4)"
}
],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "Given a fixed matching m, we may create matrices S m \u2208 R d\u00d7|m| and T m \u2208 R d\u00d7|m| such that the rows correspond to word vectors of matched vertices (translations under the matching m). Now, after some algebra, we see that we can rewrite",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(i,j)\u2208m p(t i | s i ) in matrix notation: p \u03b8 (T m | S m , m) = (i,j)\u2208m p(t i | s j ) (5) \u221d (i,j)\u2208m exp ||t i \u2212 \u2126 s j || 2 2 = exp (i,j)\u2208m ||t i \u2212 \u2126 s j || 2 2 = exp ||T m \u2212 \u2126 S m || 2 F",
"eq_num": "(6)"
}
],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "where \u2126 \u2208 R d\u00d7d is an orthogonal matrix of parameters to be learned. The result of this derivation, equation 6, will become useful during the discussion of parameter estimation in \u00a74. We define the model's parameters, to be optimized, as \u03b8 = (\u2126, \u00b5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Latent-Variable Model",
"sec_num": "3"
},
{
"text": "In the previous section, we have formulated the induction of a bilingual lexicon as the search for an edge set E, which we treat as a latent variable that we marginalize out in equation 2. Specifically, we assume that E is a partial matching. Thus, for every (i, j) \u2208 m, we have t i \u223c N (\u2126 s j , I), that is, the embedding for v trg (i) is assumed to have been drawn from a Gaussian centered around the embedding for v src (j), after an orthogonal transformation. This gives rise to two modeling assumptions, which we make explicit: (i) There exists a single source for every word in the target lexicon and that source cannot be used more than once. 4 (ii) There exists an orthogonal transformation, after which the embedding spaces are more or less equivalent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions and their Limitations",
"sec_num": null
},
{
"text": "Assumption (i) may be true for related languages, but is likely false for morphologically rich languages that have a many-to-many relationship between the words in their respective lexicons. We propose to ameliorate this using a rank constraint that only considers the top n most frequent words in both lexicons for matching in \u00a76. In addition, we experiment with priors that express different matchings in \u00a77.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions and their Limitations",
"sec_num": null
},
{
"text": "As for assumption (ii), previous work (Xing et al., 2015; Artetxe et al., 2017) has achieved some success using an orthogonal transformation; recently, however, S\u00f8gaard et al. 2018demonstrated that monolingual embedding spaces are not approximately isomorphic and that there is a complex relationship between word form and meaning, which is only inadequately modeled by current approaches, which for example cannot model polysemy. Nevertheless, we will show that imbuing our model with these assumptions helps empirically in \u00a76, giving them practical utility.",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "(Xing et al., 2015;",
"ref_id": "BIBREF40"
},
{
"start": 58,
"end": 79,
"text": "Artetxe et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions and their Limitations",
"sec_num": null
},
{
"text": "Why it Works: The Hubness Problem Why should we expect the bipartite matching prior to help, given that we know of cases when multiple source words should match a target word? One answer is because the bipartite prior helps us obviate the hubness problem, a common issue in word-embedding-based bilingual lexicon induction . The hubness problem is an intrinsic problem of high-dimensional vector spaces where certain vectors will be universal nearest neighbors, i.e. they will be the nearest neighbor to a disproportionate number of other vectors (Radovanovi\u0107 et al., 2010) . Thus, if we allow oneto-many alignments, we will find the embeddings of certain elements of V src acting as hubs, i.e. the model will pick them to generate a disproportionate number of target embeddings, which reduces the quality of the embedding space. 5 Another explanation for the positive effect of the ",
"cite_spans": [
{
"start": 547,
"end": 573,
"text": "(Radovanovi\u0107 et al., 2010)",
"ref_id": "BIBREF29"
},
{
"start": 830,
"end": 831,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions and their Limitations",
"sec_num": null
},
{
"text": "Algorithm 1 (Viterbi EM): repeat // Viterbi E-Step 3: m \u2190 argmax_{m \u2208 M} log p_\u03b8(m | S, T) 4: u_trg \u2190 V_trg \\ {i : (i, j) \u2208 m} 5: // M-Step 6: U \u03a3 V\u22a4 \u2190 SVD(T_m S_m\u22a4) 7: \u2126 \u2190 U V\u22a4 8: \u00b5 \u2190 1/|u_trg| \u2022 \u2211_{i \u2208 u_trg} t_i 9: \u03b8 \u2190 (\u2126, \u00b5) 10: until converged",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions and their Limitations",
"sec_num": null
},
{
"text": "\u03b8 \u2190 (\u2126 , \u00b5 ) 10: until converged one-to-one alignment prior is its connection to the Wasserstein distance and computational optimal transport (Villani, 2008) . Concurrent work (Grave et al., 2018) similarly has found the one-to-one alignment prior to be beneficial.",
"cite_spans": [
{
"start": 142,
"end": 157,
"text": "(Villani, 2008)",
"ref_id": "BIBREF36"
},
{
"start": 176,
"end": 196,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions and their Limitations",
"sec_num": null
},
{
"text": "We will conduct parameter estimation through Viterbi EM. We describe first the E-step, then the M-step. Viterbi EM estimates the parameters by alternating between the two steps until convergence. We give the complete pseudocode in Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "4"
},
{
"text": "The E-step asks us to compute the posterior of latent bipartite matchings p(m | S, T ). Computation of this distribution, however, is intractable as it would require a sum over all bipartite matchings, which is #P-hard (Valiant, 1979) . Tricks from combinatorial optimization make it possible to maximize over all bipartite matchings in polynomial time. Thus, we fall back on the Viterbi approximation for the E-step (Brown et al., 1993; Samdani et al., 2012) . The derivation will follow Haghighi et al. (2008) . In order to compute",
"cite_spans": [
{
"start": 219,
"end": 234,
"text": "(Valiant, 1979)",
"ref_id": "BIBREF35"
},
{
"start": 417,
"end": 437,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF7"
},
{
"start": 438,
"end": 459,
"text": "Samdani et al., 2012)",
"ref_id": "BIBREF32"
},
{
"start": 489,
"end": 511,
"text": "Haghighi et al. (2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Viterbi E-Step",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m = argmax m\u2208M log p \u03b8 (m | S, T )",
"eq_num": "(7)"
}
],
"section": "Viterbi E-Step",
"sec_num": "4.1"
},
{
"text": "we construct a fully connected bipartite graph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Viterbi E-Step",
"sec_num": "4.1"
},
{
"text": "G = (E, V src \u222a V trg ), where E = V src \u00d7 V trg .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Viterbi E-Step",
"sec_num": "4.1"
},
{
"text": "We weight each arc (i, j) \u2208 E with the weight between the projected source word and target word embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Viterbi E-Step",
"sec_num": "4.1"
},
{
"text": "w ij = log p(t i | s j ) \u2212 log p(t i ) = ||t i \u2212 \u2126 s j || 2 2 \u2212 ||t i \u2212\u00b5|| 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Viterbi E-Step",
"sec_num": "4.1"
},
{
"text": "2 , where the normalizers of both Gaussians cancel as both have the same covariance matrix, i.e., I. Note that in the case where the t i and the s j are of length 1, that is, ||t i || 2 = ||s j || 2 = 1, and \u00b5 = 0, we recover cosine distance between the vectors up to an additive constant as orthogonal matrices preserve length (the constant is always -1 as ||t i || 2 = 1). We may ignore this constant during the E-step's combinatorial optimization. Note the optimal partial matching will contain no edges with weight w ij < 0. For this reason, we remove such edges from the bipartite graph. To find the maximal partial bipartite matching on G to compute m , we employ an efficient algorithm as detailed in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Viterbi E-Step",
"sec_num": "4.1"
},
{
"text": "We frame finding an optimal one-to-one alignment between n src source and n trg words as a combinatorial optimization problem, specifically, a linear assignment problem (LAP; Bertsimas and Tsitsiklis, 1997) . In its original formulation, the LAP requires assigning a number of agents (source words) to a number of tasks (target words) at a cost that varies based on each assignment. An optimal solution assigns each source word to exactly one target word and vice versa at minimum cost. The Hungarian algorithm (Kuhn, 1955) is one of the most well-known approaches for solving the LAP, but runs in O((n src + n trg ) 3 ). This works for smaller vocabulary sizes, 6 but is prohibitive for matching cross-lingual word embeddings with large vocabularies for real-world applications. 7 For each source word, most target words, however, are unlikely candidates for alignment. We thus propose to consider only the top k most similar target words for alignment with every source word. We sparsify the graph by weighting the edges for all other words with \u2212\u221e. The remaining weights w ij are chosen as discussed above. We employ a version of the Jonker-Volgenant algorithm (Jonker and Volgenant, 1987; Volgenant, 1996) , which has been optimized for LAP on sparse graphs, to find the maximum-weight matching m on G. 8",
"cite_spans": [
{
"start": 175,
"end": 206,
"text": "Bertsimas and Tsitsiklis, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 511,
"end": 523,
"text": "(Kuhn, 1955)",
"ref_id": "BIBREF21"
},
{
"start": 780,
"end": 781,
"text": "7",
"ref_id": null
},
{
"start": 1164,
"end": 1192,
"text": "(Jonker and Volgenant, 1987;",
"ref_id": "BIBREF17"
},
{
"start": 1193,
"end": 1209,
"text": "Volgenant, 1996)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding a Maximal Bipartite Matching",
"sec_num": null
},
{
"text": "Next, we will describe the M-step. Given an optimal matching m computed in \u00a74.1, we search for a matrix \u2126 \u2208 R d\u00d7d . We additionally enforce the constraint that \u2126 is a real orthogonal matrix, i.e., \u2126 \u2126 = I. Previous work (Xing et al., 2015; Artetxe et al., 2017) found that the orthogonality constraint leads to noticeable improvements.",
"cite_spans": [
{
"start": 220,
"end": 239,
"text": "(Xing et al., 2015;",
"ref_id": "BIBREF40"
},
{
"start": 240,
"end": 261,
"text": "Artetxe et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "M-Step",
"sec_num": "4.2"
},
{
"text": "Our M-step optimizes two objectives independently. First, making use of the result in equation (6), we optimize the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-Step",
"sec_num": "4.2"
},
{
"text": "log p(T m | S m ,m ) (8) = ||T m \u2212 \u2126 S m || 2 F + C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-Step",
"sec_num": "4.2"
},
{
"text": "with respect to \u2126 subject to \u2126 \u2126 = I. (Note we may ignore the constant C during the optimization.) Second, we optimize the objective",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-Step",
"sec_num": "4.2"
},
{
"text": "log i\u2208utrg p(t i ) = i\u2208utrg ||t i \u2212 \u00b5|| 2 2 + D (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-Step",
"sec_num": "4.2"
},
{
"text": "with respect to the mean parameter \u00b5, which is simply an average. Note, again, we may ignore the constant D during optimization. Optimizing equation 8with respect to \u2126 is known as the orthogonal Procrustes problem (Sch\u00f6nemann, 1966; Gower and Dijksterhuis, 2004) and has a closed form solution that exploits the singular value decomposition (Horn and Johnson, 2012). Namely, we compute U \u03a3V = T m S m . Then, we directly arrive at the optimum: \u2126 = U V . Optimizing equation 9can also been done in closed form; the point which minimizes distance to the data points (thereby maximizing the log-probability) is the centroid:",
"cite_spans": [
{
"start": 214,
"end": 232,
"text": "(Sch\u00f6nemann, 1966;",
"ref_id": "BIBREF33"
},
{
"start": 233,
"end": 262,
"text": "Gower and Dijksterhuis, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "M-Step",
"sec_num": "4.2"
},
{
"text": "\u00b5 = 1 /|utrg| \u2022 i\u2208utrg t i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-Step",
"sec_num": "4.2"
},
{
"text": "The self-training method of Artetxe et al. 2017, our strongest baseline in \u00a76, may also be interpreted as a latent-variable model in the spirit of our exposition in \u00a73. Indeed, we only need to change the edge-set prior p(m) to allow for edge sets other than those that are matchings. Specifically, a matching enforces a one-to-one alignment between types in the respective lexicons. Artetxe et al. 2017, on the other hand, allow for one-to-many alignments. We show how this corresponds to an alignment distribution that is equivalent to IBM Model 1 (Brown et al., 1993) , and that Artetxe et al. (2017) 's selftraining method is actually a form of Viterbi EM.",
"cite_spans": [
{
"start": 549,
"end": 569,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF7"
},
{
"start": 596,
"end": 602,
"text": "(2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinterpretation of Artetxe et al. (2017) as a Latent-Variable Model",
"sec_num": "5"
},
{
"text": "To formalize Artetxe et al. (2017) 's contribution as a latent-variable model, we lay down some more notation. Let A = {1, . . . , n src + 1} ntrg , where we define (n src + 1) to be none, a distinguished symbol indicating unalignment. The set A is to be interpreted as the set of all one-to-many alignments a on the bipartite vertex set V = V trg \u222a V src such that a i = j means the i th vertex in V trg is aligned to the j th vertex in V src . Note that a i = (n src + 1) = none means that the i th element of V trg is unaligned. Now, by analogy to our formulation in \u00a73, we define",
"cite_spans": [
{
"start": 28,
"end": 34,
"text": "(2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinterpretation of Artetxe et al. (2017) as a Latent-Variable Model",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(T | S) := a\u2208A p(T | S, a) \u2022 p(a)",
"eq_num": "(10)"
}
],
"section": "Reinterpretation of Artetxe et al. (2017) as a Latent-Variable Model",
"sec_num": "5"
},
{
"text": "= a\u2208A ntrg i=1 p(t i | s a i , a i ) \u2022 p(a i ) (11) = ntrg i=1 nsrc+1 a i =1 p(t i | s a i , a i ) \u2022 p(a i ) (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinterpretation of Artetxe et al. (2017) as a Latent-Variable Model",
"sec_num": "5"
},
{
"text": "The move from equation 11to equation 12is the dynamic-programming trick introduced in Brown et al. (1993) . This reduces the number of terms in the expression from exponentially many to polynomially many. We take p(a) to be a uniform distribution over all alignments with no parameters to be learned.",
"cite_spans": [
{
"start": 99,
"end": 105,
"text": "(1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinterpretation of Artetxe et al. (2017) as a Latent-Variable Model",
"sec_num": "5"
},
{
"text": "Step In the context of Viterbi EM, it means the max over A will decompose additively s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artetxe et al. (2017)'s Viterbi E-",
"sec_num": null
},
{
"text": "max a\u2208A log p(a | S, T ) = ntrg i=1 max 1\u2264a i \u2264(nsrc+1) log p(a i | S, T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artetxe et al. (2017)'s Viterbi E-",
"sec_num": null
},
{
"text": "thus, we can simply find a component-wise as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artetxe et al. (2017)'s Viterbi E-",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i = argmax 1\u2264a i \u2264(nsrc+1) log p(a i | t i , s a i )",
"eq_num": "(13)"
}
],
"section": "Artetxe et al. (2017)'s Viterbi E-",
"sec_num": null
},
{
"text": "Artetxe et al. (2017) 's M-step The M-step remains unchanged from the exposition in \u00a73 with the exception that we fit \u2126 given matrices S a and T a formed from a one-to-many alignment a, rather than a matching m.",
"cite_spans": [
{
"start": 15,
"end": 21,
"text": "(2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Artetxe et al. (2017)'s Viterbi E-",
"sec_num": null
},
{
"text": "Why a Reinterpretation? The reinterpretation of Artetxe et al. 2017as a probabilistic model yields a clear analytical comparison between our method and theirs. The only difference between the two is the constraint on the bilingual lexicon that the model is allowed to induce.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artetxe et al. (2017)'s Viterbi E-",
"sec_num": null
},
{
"text": "We first conduct experiments on bilingual dictionary induction and cross-lingual word similarity on three standard language pairs, English-Italian, English-German, and English-Finnish. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Datasets For bilingual dictionary induction, we use the English-Italian dataset by and the English-German and English-Finnish datasets by Artetxe et al. (2017) . For cross-lingual word similarity, we use the RG-65 and WordSim-353 cross-lingual datasets for English-German and the WordSim-353 cross-lingual dataset for English-Italian by Camacho-Collados et al. (2015) .",
"cite_spans": [
{
"start": 138,
"end": 159,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 337,
"end": 367,
"text": "Camacho-Collados et al. (2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Details",
"sec_num": "6.1"
},
{
"text": "We follow Artetxe et al. (2017) and train monolingual embeddings with word2vec, CBOW, and negative sampling (Mikolov et al., 2013a ) on a 2.8 billion word corpus for English (ukWaC + Wikipedia + BNC), a 1.6 billion word corpus for Italian (itWaC), a 0.9 billion word corpus for German (SdeWaC), and a 2.8 billion word corpus for Finnish (Common Crawl).",
"cite_spans": [
{
"start": 25,
"end": 31,
"text": "(2017)",
"ref_id": "BIBREF0"
},
{
"start": 108,
"end": 130,
"text": "(Mikolov et al., 2013a",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Embeddings",
"sec_num": null
},
{
"text": "Seed dictionaries Following Artetxe et al. 2017, we use dictionaries of 5,000 words, 25 words, and a numeral dictionary consisting of words matching the [0-9]+ regular expression in both vocabularies. 9 In line with , we additionally use a dictionary of identically spelled strings in both vocabularies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Embeddings",
"sec_num": null
},
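The weakly supervised seed dictionaries above are simple to construct: numerals are words matching `[0-9]+` in both vocabularies, and identical strings are words spelled the same in both. A toy sketch (vocabularies below are made up for illustration):

```python
# Build a weakly supervised seed lexicon from two vocabularies, either from
# shared numerals ([0-9]+ in both) or from identically spelled strings.
import re

def seed_lexicon(vocab_src, vocab_trg, mode="identical"):
    if mode == "numerals":
        num = re.compile(r"^[0-9]+$")
        shared = {w for w in vocab_src if num.match(w)} & set(vocab_trg)
    else:  # identically spelled strings
        shared = set(vocab_src) & set(vocab_trg)
    return sorted((w, w) for w in shared)

v_en = ["the", "1984", "berlin", "42", "house"]
v_de = ["der", "1984", "berlin", "42", "haus"]
assert seed_lexicon(v_en, v_de, "numerals") == [("1984", "1984"), ("42", "42")]
assert seed_lexicon(v_en, v_de) == [("1984", "1984"), ("42", "42"),
                                    ("berlin", "berlin")]
```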
{
"text": "Implementation details Similar to Artetxe et al. 2017, we stop training when the improvement on the average cosine similarity for the induced dictionary is below 1 \u00d7 10 \u22126 between succeeding iterations. Unless stated otherwise, we induce a dictionary of 200,000 source and 200,000 target words as in previous work (Mikolov et al., 2013c; Artetxe et al., 2016) . For optimal 1:1 alignment, we have observed the best results by keeping the top k = 3 most similar target words. If using a rank constraint, we restrict the matching in the Estep to the top 40,000 words in both languages. 10 Finding an optimal alignment on the 200,000 \u00d7 200,000 graph takes about 25 minutes on CPU; 11 with a rank constraint, matching takes around three minutes.",
"cite_spans": [
{
"start": 314,
"end": 337,
"text": "(Mikolov et al., 2013c;",
"ref_id": "BIBREF27"
},
{
"start": 338,
"end": 359,
"text": "Artetxe et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 584,
"end": 586,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Embeddings",
"sec_num": null
},
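The rank-constrained E-step restricts the exact 1:1 matching to the top-n most frequent words. A minimal sketch using SciPy's assignment solver (a modified Jonker-Volgenant algorithm, in the spirit of the cited Jonker and Volgenant, 1987) as a stand-in for the paper's solver, on a random similarity matrix:

```python
# Rank-constrained 1:1 matching sketch: solve the optimal assignment only
# over the top-n words (here n=3 of 5); the remaining words would be handled
# by cheaper nearest-neighbor alignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
sim = rng.normal(size=(5, 5))   # target x source similarities (stand-in data)
n = 3                           # rank constraint: match only the top-n words

rows, cols = linear_sum_assignment(-sim[:n, :n])  # negate to maximize
matching = dict(zip(rows.tolist(), cols.tolist()))

# The result is a 1:1 matching over the restricted vocabulary
assert len(matching) == n and len(set(matching.values())) == n
```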
{
"text": "Baselines We compare our approach with and without the rank constraint to the original bilingual mapping approach by Mikolov et al. (2013c) . In addition, we compare with Zhang et al. (2016) and Xing et al. (2015) who augment the former with an orthogonality constraint and normalization and an orthogonality constraint respectively. Finally, we compare with Artetxe et al. (2016) who add dimension-wise mean centering to Xing et al. (2015) , and Artetxe et al. (2017) .",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "Mikolov et al. (2013c)",
"ref_id": "BIBREF27"
},
{
"start": 171,
"end": 190,
"text": "Zhang et al. (2016)",
"ref_id": "BIBREF41"
},
{
"start": 195,
"end": 213,
"text": "Xing et al. (2015)",
"ref_id": "BIBREF40"
},
{
"start": 359,
"end": 380,
"text": "Artetxe et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 422,
"end": 440,
"text": "Xing et al. (2015)",
"ref_id": "BIBREF40"
},
{
"start": 462,
"end": 468,
"text": "(2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Embeddings",
"sec_num": null
},
{
"text": "Both Mikolov et al. (2013c) and Artetxe et al. (2017) are special cases of our famework and comparisons to these approaches thus act as an ablation study. Specifically, Mikolov et al. (2013c) does not employ orthogonal Procrustes, but rather allows the learned matrix \u2126 to range freely. Likewise, as discussed in \u00a75, Artetxe et al. (2017) make use of a Viterbi EM style algorithm with a different prior over edge sets. 12",
"cite_spans": [
{
"start": 5,
"end": 27,
"text": "Mikolov et al. (2013c)",
"ref_id": "BIBREF27"
},
{
"start": 32,
"end": 53,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 169,
"end": 191,
"text": "Mikolov et al. (2013c)",
"ref_id": "BIBREF27"
},
{
"start": 317,
"end": 338,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Embeddings",
"sec_num": null
},
{
"text": "We show results for bilingual dictionary induction in Table 1 and for cross-lingual word similarity in Table 2 . Our method with a 1:1 prior outperforms all baselines on English-German and English-Italian. 13 Interestingly, the 1:1 prior by itself fails on English-Finnish with a 25 word and numerals seed lexicon. We hypothesize that the prior imposes too strong of a constraint to find a good solution for a distant language pair from a poor initialization. With a better-but still weakly supervised-starting point using identical strings, our approach finds a good solution. Alternatively, we can mitigate this deficiency effectively using a rank constraint, which allows our model to converge to good solutions even with a 25 word or numerals seed lexicon. The rank constraint generally performs similarly or boosts performance, while being about 8 times faster. All approaches do better with identical strings compared to numerals, indicating that the former may be generally suitable as a default weakly-supervised seed lexicon.",
"cite_spans": [
{
"start": 206,
"end": 208,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 103,
"end": 110,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "On cross-lingual word similarity, our approach yields the best performance on WordSim-353 and RG-65 for English-German and is only outperformed by Artetxe et al. (2017) on English-Italian Wordsim-353.",
"cite_spans": [
{
"start": 147,
"end": 168,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "Vocabulary sizes The beneficial contribution of the rank constraint demonstrates that in similar languages, many frequent words will have one-to-one matchings, while it may be harder to find direct matches for infrequent words. We would thus expect the latent variable model to perform better if we only learn dictionaries for the top n most frequent words in both languages. We show results for our approach in comparison to the baselines in Figure 2 for English-Italian using a 5,000 word seed lexicon across vocabularies consisting of different numbers n of the most frequent words 14 . The comparison approaches mostly perform similar, while our approach performs particularly well for the most frequent words in a language.",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 451,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7"
},
{
"text": "Different priors An advantage of having an explicit prior as part of the model is that we can experiment with priors that satisfy different assumptions. Besides the 1:1 prior, we experiment with a 2:2 prior and a 1:2 prior. For the 2:2 prior, we create copies of the source and target words V src and V trg and add these to our existing set of vertices V = (V trg +V trg , V src +V src ). We run the Viterbi E-step on this new graph G and merge matched pairs of words and their copies in the end. Similarly, for the 1:2 prior, which allows one source word to be matched to two target words, we augment the vertices with a copy of the source words V src and proceed as above. We show results for bilingual dictionary induction with different priors across different vocabulary sizes in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 785,
"end": 793,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7"
},
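The vertex-duplication construction can be sketched concretely for the 1:2 prior: stacking a copy of the source columns lets each source word win up to two target words under an ordinary 1:1 assignment, after which a word and its copy are merged. Random similarities below are illustrative; the 2:2 prior would duplicate both sides analogously.

```python
# 1:2 prior via graph duplication: duplicate the source vertices, solve a
# standard assignment, then merge each copy back onto its original word.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n_trg, n_src = 4, 2
sim = rng.normal(size=(n_trg, n_src))        # target x source similarities

sim_dup = np.hstack([sim, sim])              # add a copy of each source word
rows, cols = linear_sum_assignment(-sim_dup) # maximize total similarity

# Column j and column j + n_src are the same source word
alignment = {int(t): int(s % n_src) for t, s in zip(rows, cols)}

counts = {s: list(alignment.values()).count(s) for s in range(n_src)}
assert counts == {0: 2, 1: 2}  # each source word matched to exactly 2 targets
```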
{
"text": "The 2:2 prior performs best for small vocabulary sizes. As solving the linear assignment problem for larger vocabularies becomes progressively more challenging, the differences between the priors become obscured and their performance converges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7"
},
{
"text": "We analyze empirically whether the prior helps with the hubness problem. Following , we define the hubness N k (y) at k of a target word y as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hubness problem",
"sec_num": null
},
{
"text": "N k (y) = |{x \u2208 Q | y \u2208 NN k (x, G)}| (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hubness problem",
"sec_num": null
},
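Equation (14) simply counts, for each target word, how many query words have it among their k nearest neighbors. A minimal sketch with tiny one-dimensional stand-in "embeddings":

```python
# Hubness N_k(y) of Eq. (14): the number of query source words x whose
# k-nearest-neighbor set contains target word y.
import numpy as np

def hubness(queries, targets, k):
    """Return N_k for every target word."""
    d = np.abs(queries[:, None] - targets[None, :])  # pairwise distances
    nn = np.argsort(d, axis=1)[:, :k]                # k nearest targets
    counts = np.zeros(len(targets), dtype=int)
    for row in nn:
        counts[row] += 1
    return counts

Q = np.array([0.0, 0.1, 0.2, 5.0])   # query "embeddings" (illustrative)
T = np.array([0.05, 4.9, 10.0])      # target "embeddings" (illustrative)
N2 = hubness(Q, T, k=2)
assert N2.sum() == 2 * len(Q)        # every query contributes k counts
assert N2[0] == 4                    # target 0.05 is a hub: near all queries
```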
{
"text": "where Q is a set of query source language words and NN k (x, G) denotes the k nearest neighbors (a) English-Italian (b) English-German (c) English-Finnish Figure 3 : Bilingual dictionary induction results of our method with different priors using a 5,000 word seed lexicon across different vocabulary sizes. Artetxe et al. (2017) Ours 1:1luis (20) gleichg\u00fcltigkeit -'indifference' (14) ungarischen heuchelei -'Hungarian' 18-'hypocrisy' (13) jorge 17ahmed 13mohammed 17ideologie -'ideology' (13) gewi\u00df eduardo (13) -'certainly' (17) of x in the graph G. 15 In accordance with Lazaridou et al. (2015), we set k = 20 and use the words in the evaluation dictionary as query terms. We show the target language words with the highest hubness using our method and Artetxe et al. (2017) for English-German with a 5,000 seed lexicon and the full vocabulary in Table 3 . 16 Hubs are fewer and occur less often with our method, demonstrating that the prior-to some extent-aids with resolving hubness. Interestingly, compared to , hubs seem to occur less often and are more meaningful in current cross-lingual word embedding models. 17 For instance, the neighbors of 'gleichg\u00fcltigkeit' all relate to indifference and words appearing close to 'luis' or 'jorge' are Spanish names. This suggests that the prior might also be beneficial in other ways, e.g. by enforcing more reliable translation pairs for subsequent iterations. 15 In other words, the hubness of a target word measures how often it occurs in the neighborhood of the query terms. 16 We verified that hubs are mostly consistent across runs and similar across language pairs.",
"cite_spans": [
{
"start": 308,
"end": 329,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 553,
"end": 555,
"text": "15",
"ref_id": null
},
{
"start": 772,
"end": 778,
"text": "(2017)",
"ref_id": "BIBREF0"
},
{
"start": 861,
"end": 863,
"text": "16",
"ref_id": null
},
{
"start": 1413,
"end": 1415,
"text": "15",
"ref_id": null
},
{
"start": 1530,
"end": 1532,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 3",
"ref_id": null
},
{
"start": 851,
"end": 858,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Hubness problem",
"sec_num": null
},
{
"text": "17 observed mostly rare words with N20 values of up to 50 and many with N20 > 20. Low-resource languages Cross-lingual embeddings are particularly promising for low-resource languages, where few labeled examples are typically available, but are not adequately reflected in current benchmarks (besides the English-Finnish language pair). We perform experiments with our method with and without a rank constraint and Artetxe et al. (2017) for three truly lowresource language pairs, English-{Turkish, Bengali, Hindi}. We additionally conduct an experiment for Estonian-Finnish, similarly to . For all languages, we use fastText embeddings (Bojanowski et al., 2017) trained on Wikipedia, the evaluation dictionaries provided by Conneau et al. (2018) , and a seed lexicon based on identical strings to reflect a realistic use case. We note that English does not share scripts with Bengali and Hindi, making this even more challenging. We show results in Table 4 . Surprisingly, the method by Artetxe et al. (2017) is unable to leverage the weak supervision and fails to converge to a good solution for English-Bengali and English-Hindi. 18 Our method without a rank constraint significantly outperforms Artetxe et al. (2017) , while particularly for English-Hindi the rank constraint dramatically boosts performance.",
"cite_spans": [
{
"start": 430,
"end": 436,
"text": "(2017)",
"ref_id": "BIBREF0"
},
{
"start": 637,
"end": 662,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 725,
"end": 746,
"text": "Conneau et al. (2018)",
"ref_id": "BIBREF9"
},
{
"start": 988,
"end": 1009,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 1133,
"end": 1135,
"text": "18",
"ref_id": null
},
{
"start": 1214,
"end": 1220,
"text": "(2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 950,
"end": 957,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Hubness problem",
"sec_num": null
},
{
"text": "Error analysis To illustrate the types of errors the model of Artetxe et al. (2017) and our method with a rank constraint make, we query both of them with words from the test dictionary of Artetxe et al. (2017) in German and seek their nearest neighbours in the English embedding space. P@1 over the German-English test set is 36.38 and 39.18 for Artetxe et al. (2017) and our method respectively. We show examples where nearest neighbours of the methods differ in Table 5 . Similar to Kementchedjhieva et al. (2018) , we find that morphologically related words are often the source of mistakes. Other common sources of mistakes in this dataset are names that are translated to different names and nearly synonymous words being predicted. Both of these sources indicate that while the learned alignment is generally good, it is often not sufficiently precise.",
"cite_spans": [
{
"start": 62,
"end": 83,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 204,
"end": 210,
"text": "(2017)",
"ref_id": "BIBREF0"
},
{
"start": 362,
"end": 368,
"text": "(2017)",
"ref_id": "BIBREF0"
},
{
"start": 486,
"end": 516,
"text": "Kementchedjhieva et al. (2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 465,
"end": 472,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Hubness problem",
"sec_num": null
},
{
"text": "Cross-lingual embedding priors Haghighi et al. (2008) first proposed an EM self-learning method for bilingual lexicon induction, representing words with orthographic and context features and using the Hungarian algorithm in the E-step to find an optimal 1:1 matching. Artetxe et al. 2017proposed a similar self-learning method that uses word embeddings, with an implicit one-to-many alignment based on nearest neighbor queries. Vuli\u0107 and Korhonen (2016) proposed a more strict one-to-many alignment based on symmetric translation pairs, which is also used by Conneau et al. (2018) . Our method bridges the gap between early latent variable and word embedding-based approaches and explicitly allows us to reason over its prior.",
"cite_spans": [
{
"start": 31,
"end": 53,
"text": "Haghighi et al. (2008)",
"ref_id": "BIBREF14"
},
{
"start": 428,
"end": 453,
"text": "Vuli\u0107 and Korhonen (2016)",
"ref_id": "BIBREF38"
},
{
"start": 559,
"end": 580,
"text": "Conneau et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "Hubness problem The hubness problem is an intrinsic problem in high-dimensional vector spaces (Radovanovi\u0107 et al., 2010) . first observed it for cross-lingual embedding spaces and proposed to address it by re-ranking neighbor lists. proposed a max-marging objective as a solution, while more recent approaches proposed to modify the nearest neighbor retrieval by inverting the softmax (Smi, 2017) or scaling the similarity values (Conneau et al., 2018) .",
"cite_spans": [
{
"start": 94,
"end": 120,
"text": "(Radovanovi\u0107 et al., 2010)",
"ref_id": "BIBREF29"
},
{
"start": 385,
"end": 396,
"text": "(Smi, 2017)",
"ref_id": null
},
{
"start": 430,
"end": 452,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "We have presented a novel latent-variable model for bilingual lexicon induction, building on the work of Artetxe et al. (2017) . Our model combines the prior over bipartite matchings inspired by Haghighi et al. (2008) and the discriminative, rather than generative, approach inspired by Irvine and Callison-Burch (2013) . We show empirical gains on six language pairs and theoretically and empirically demonstrate the application of the bipartite matching prior to solving the hubness problem.",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "Artetxe et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 195,
"end": 217,
"text": "Haghighi et al. (2008)",
"ref_id": "BIBREF14"
},
{
"start": 287,
"end": 319,
"text": "Irvine and Callison-Burch (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "For the purposes of this paper, we use bilingual lexicon and (bilingual) dictionary synonymously. On the other hand, unmodified lexicon always refers to a word list in a single language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is true by the definition of a matching.5 In \u00a75, we discuss the one-to-many alignment used in several of our baseline systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Haghighi et al. (2008) use the Hungarian algorithm to find a matching between 2000 source and target language words.7 For reference, in \u00a76, we learn bilingual lexicons between embeddings of 200,000 source and target language words.8 After acceptance to EMNLP 2018, Edouard Grave pointed out that Sinkhorn propagation(Adams and Zemel, 2011;Mena et al., 2018) may have been a computationally more effective manner to deal with the latent matchings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The resulting dictionaries contain 2772, 2148, and 2345 entries for English-{Italian, German, Finnish} respectively.10 We validated both values with identical strings using the 5,000 word lexicon as validation set on English-Italian.11 Training takes a similar amount of time as(Artetxe et al., 2017) due to faster convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Other recent improvements such as symmetric reweighting(Artetxe et al., 2018) are orthogonal to our method, which is why we do not explicitly compare to them here.13 Note that results are not directly comparable to(Conneau et al., 2018) due to the use of embeddings trained on different monolingual corpora (WaCKy vs. Wikipedia).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We only use the words in the 5,000 word seed lexicon that are contained in the n most frequent words. We do not show results for the 25 word seed lexicon and numerals as they are not contained in the smallest n of most frequent words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One possible explanation is that Artetxe et al.(2017) particularly rely on numerals, which are normalized in the fastText embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors acknowledge Edouard Grave ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [],
"year": null,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bilingual word vectors, orthogonal transforma- tions and the inverted softmax. In Proceedings of ICLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-16)",
"volume": "",
"issue": "",
"pages": "2289--2294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP- 16), pages 2289-2294.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics, pages 451-462.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generalizing and Improving Bilingual Word Embedding Mappings with a Multi-Step Framework of Linear Transformations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Generalizing and Improving Bilingual Word Embed- ding Mappings with a Multi-Step Framework of Lin- ear Transformations. In Proceedings of AAAI 2018.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Introduction to linear optimization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bertsimas",
"suffix": ""
},
{
"first": "J",
"middle": [
"N"
],
"last": "Tsitsiklis",
"suffix": ""
}
],
"year": 1997,
"venue": "Athena Scientific",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bertsimas and J.N. Tsitsiklis. 1997. Introduction to linear optimization. Athena Scientific.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Enriching Word Vectors with Subword Information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Pa- rameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Framework for the Construction of Monolingual and Cross-lingual Word Similarity Datasets",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A Framework for the Construction of Monolingual and Cross-lingual Word Similarity Datasets. Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers), (April 2016).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Word Translation Without Parallel Data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word Translation Without Parallel Data. In Proceed- ings of ICLR 2018.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving Zero-Shot Learning by Mitigating the Hubness Problem",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Workshop track",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Ba- roni. 2015. Improving Zero-Shot Learning by Miti- gating the Hubness Problem. ICLR 2015 Workshop track, pages 1-10.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Compiling bilingual lexicon entries from a non-parallel english-chinese corpus",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 1995,
"venue": "Third Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung. 1995. Compiling bilingual lexicon en- tries from a non-parallel english-chinese corpus. In Third Workshop on Very Large Corpora.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Procrustes problems",
"authors": [
{
"first": "C",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Garmt",
"middle": [
"B"
],
"last": "Gower",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dijksterhuis",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C. Gower and Garmt B. Dijksterhuis. 2004. Pro- crustes problems. Oxford University Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised Alignment of Embeddings with Wasserstein Procrustes",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Berthet",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.11222"
]
},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Armand Joulin, and Quentin Berthet. 2018. Unsupervised Alignment of Embeddings with Wasserstein Procrustes. arXiv preprint arXiv:1805.11222.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning Bilingual Lexicons from Monolingual Corpora",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL 2008",
"volume": "",
"issue": "",
"pages": "771--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning Bilingual Lexicons from Monolingual Corpora. In Proceedings of ACL 2008, June, pages 771-779.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Matrix Analysis",
"authors": [
{
"first": "A",
"middle": [],
"last": "Roger",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"R"
],
"last": "Horn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger A. Horn and Charles R. Johnson. 2012. Matrix Analysis. Cambridge University Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Supervised bilingual lexicon induction with multiple monolingual signals",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Irvine",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "518--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Irvine and Chris Callison-Burch. 2013. Su- pervised bilingual lexicon induction with multiple monolingual signals. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 518-523, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A shortest augmenting path algorithm for dense and sparse linear assignment problems",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Jonker",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Volgenant",
"suffix": ""
}
],
"year": 1987,
"venue": "Computing",
"volume": "38",
"issue": "4",
"pages": "325--340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Jonker and Anton Volgenant. 1987. A shortest augmenting path algorithm for dense and sparse lin- ear assignment problems. Computing, 38(4):325- 340.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Generalizing Procrustes Analysis for Better Bilingual Dictionary Induction",
"authors": [
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yova Kementchedjhieva, Sebastian Ruder, Ryan Cot- terell, and Anders S\u00f8gaard. 2018. Generalizing Pro- crustes Analysis for Better Bilingual Dictionary In- duction. In Proceedings of CoNLL 2018.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Toward statistical machine translation without parallel corpora",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Irvine",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "130--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Klementiev, Ann Irvine, Chris Callison- Burch, and David Yarowsky. 2012. Toward statisti- cal machine translation without parallel corpora. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Lin- guistics, pages 130-140, Avignon, France. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "MT Summit",
"volume": "5",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit, vol- ume 5, pages 79-86.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Hungarian method for the assignment problem",
"authors": [
{
"first": "Harold",
"middle": [
"W"
],
"last": "Kuhn",
"suffix": ""
}
],
"year": 1955,
"venue": "Naval Research Logistics (NRL)",
"volume": "2",
"issue": "",
"pages": "83--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harold W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistics (NRL), 2(1-2):83-97.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "270--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Georgiana Dinu, and Marco Ba- roni. 2015. Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning. Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing, pages 270-280.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Cheap translation for cross-lingual named entity recognition",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Chen-Tse",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2536--2545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2536-2545, Copenhagen, Denmark. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning latent permutations with Gumbel-Sinkhorn networks",
"authors": [
{
"first": "Gonzalo",
"middle": [],
"last": "Mena",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Belanger",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Linderman",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Snoek",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.08665"
]
},
"num": null,
"urls": [],
"raw_text": "Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. 2018. Learning latent permutations with Gumbel-Sinkhorn networks. arXiv preprint arXiv:1802.08665.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Distributed Representations of Words and Phrases and their Compositionality. In Ad- vances in Neural Information Processing Systems.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Learning Representations (ICLR) Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Efficient estimation of word represen- tations in vector space. In International Conference on Learning Representations (ICLR) Workshop.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exploiting Similarities among Languages for Machine Translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013c. Exploiting Similarities among Languages for Ma- chine Translation.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A new view of the EM algorithm that justifies incremental, sparse and other variants",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Neal",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 1998,
"venue": "Learning in Graphical Models",
"volume": "",
"issue": "",
"pages": "355--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. M. Neal and G. E. Hinton. 1998. A new view of the EM algorithm that justifies incremental, sparse and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer Aca- demic Publishers.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Hubs in space: Popular nearest neighbors in high-dimensional data",
"authors": [
{
"first": "Milos",
"middle": [],
"last": "Radovanovi\u0107",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Nanopoulos",
"suffix": ""
},
{
"first": "Mirjana",
"middle": [],
"last": "Ivanovic",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milos Radovanovi\u0107, Alexandros Nanopoulos, and Mir- jana Ivanovic. 2010. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Ma- chine Learning Research, 11.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Identifying word translations in non-parallel texts",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "320--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd Annual Meeting of the Association for Computa- tional Linguistics, pages 320-322, Cambridge, Mas- sachusetts, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2018. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Unified expectation maximization",
"authors": [
{
"first": "Rajhans",
"middle": [],
"last": "Samdani",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "688--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajhans Samdani, Ming-Wei Chang, and Dan Roth. 2012. Unified expectation maximization. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 688-698, Montr\u00e9al, Canada. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A generalized solution of the orthogonal Procrustes problem",
"authors": [
{
"first": "Peter",
"middle": [
"H"
],
"last": "Sch\u00f6nemann",
"suffix": ""
}
],
"year": 1966,
"venue": "Psychometrika",
"volume": "31",
"issue": "1",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter H. Sch\u00f6nemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1-10.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "On the Limitations of Unsupervised Bilingual Dictionary Induction",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the Limitations of Unsupervised Bilingual Dictionary Induction. In Proceedings of ACL 2018.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The complexity of computing the permanent",
"authors": [
{
"first": "Leslie",
"middle": [
"G"
],
"last": "Valiant",
"suffix": ""
}
],
"year": 1979,
"venue": "Theoretical Computer Science",
"volume": "8",
"issue": "2",
"pages": "189--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie G. Valiant. 1979. The complexity of comput- ing the permanent. Theoretical Computer Science, 8(2):189-201.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Optimal transport: Old and new",
"authors": [
{
"first": "C\u00e9dric",
"middle": [],
"last": "Villani",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "338",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00e9dric Villani. 2008. Optimal transport: Old and new, volume 338. Springer Science & Business Media.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Linear and semi-assignment problems: a core oriented approach",
"authors": [
{
"first": "A",
"middle": [],
"last": "Volgenant",
"suffix": ""
}
],
"year": 1996,
"venue": "Computers & Operations Research",
"volume": "23",
"issue": "10",
"pages": "917--932",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Volgenant. 1996. Linear and semi-assignment prob- lems: a core oriented approach. Computers & Oper- ations Research, 23(10):917-932.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "On the Role of Seed Lexicons in Learning Bilingual Word Embeddings",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "247--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Anna Korhonen. 2016. On the Role of Seed Lexicons in Learning Bilingual Word Embed- dings. Proceedings of ACL, pages 247-257.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Normalized Word Embedding and Orthogonal Transform for Bilingual Word Translation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiye",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "1005--1010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Xing, Chao Liu, Dong Wang, and Yiye Lin. 2015. Normalized Word Embedding and Orthog- onal Transform for Bilingual Word Translation. NAACL-2015, pages 1005-1010.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Ten Pairs to Tag -- Multilingual POS Tagging via Coarse Mapping between Embeddings",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gaddy",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT 2016",
"volume": "",
"issue": "",
"pages": "1307--1317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten Pairs to Tag Multi- lingual POS Tagging via Coarse Mapping between Embeddings. In Proceedings of NAACL-HLT 2016, pages 1307-1317.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Partial lexicons of German and English shown as a bipartite graph.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Bilingual dictionary induction results of our method and baselines for English-Italian with a 5,000 word seed lexicon across different vocabulary sizes.",
"num": null
},
"TABREF1": {
"text": "Artetxe et al. (2017) 39.67 37.27 39.40 39.97 40.87 39.60 40.27 40.67 28.72 28.16 26.47 27.88 , rank constr.) 42.47 41.13 41.40 41.80 41.93 42.40 41.93 41.47 28.23 27.04 27.60 27.81",
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">English-Italian</td><td/><td colspan=\"2\">English-German</td><td/><td colspan=\"2\">English-Finnish</td></tr><tr><td/><td>5,000</td><td>25</td><td>num</td><td>iden 5,000</td><td>25</td><td>num</td><td>iden 5,000</td><td>25</td><td>num</td><td>iden</td></tr><tr><td colspan=\"4\">Mikolov et al. (2013c) 34.93 00.00 0.00</td><td colspan=\"2\">1.87 35.00 0.00</td><td colspan=\"3\">0.07 19.20 25.91 0.00</td><td>0.00</td><td>7.02</td></tr><tr><td>Xing et al. (2015)</td><td colspan=\"2\">36.87 0.00</td><td colspan=\"3\">0.13 27.13 41.27 0.07</td><td colspan=\"3\">0.53 38.13 28.23 0.07</td><td colspan=\"2\">0.56 17.95</td></tr><tr><td>Zhang et al. (2016)</td><td colspan=\"2\">36.73 0.07</td><td colspan=\"3\">0.27 28.07 40.80 0.13</td><td colspan=\"3\">0.87 38.27 28.16 0.14</td><td colspan=\"2\">0.42 17.56</td></tr><tr><td>Artetxe et al. (2016)</td><td colspan=\"2\">39.27 0.07</td><td colspan=\"3\">0.40 31.07 41.87 0.13</td><td colspan=\"3\">0.73 41.53 30.62 0.21</td><td colspan=\"2\">0.77 22.61</td></tr><tr><td>Ours (1:1)</td><td colspan=\"8\">41.00 39.63 40.47 41.07 42.60 42.40 42.60 43.20 29.78 0.07</td><td colspan=\"2\">3.02 29.76</td></tr><tr><td>Ours (1:1</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null
},
"TABREF2": {
"text": "Precision at 1 (P@1) scores for bilingual lexicon induction of different models with different seed dictionaries and languages on the full vocabulary.",
"html": null,
"content": "<table><tr><td/><td/><td>en-it</td><td>en-de</td></tr><tr><td/><td colspan=\"3\">Dict WS RG WS</td></tr><tr><td>Mikolov et al. (2013c)</td><td>5k</td><td colspan=\"2\">.627 .643 .528</td></tr><tr><td>Xing et al. (2015)</td><td>5k</td><td colspan=\"2\">.614 .700 .595</td></tr><tr><td>Zhang et al. (2016)</td><td>5k</td><td colspan=\"2\">.616 .704 .596</td></tr><tr><td>Artetxe et al. (2016)</td><td>5k</td><td colspan=\"2\">.617 .716 .597</td></tr><tr><td/><td>5k</td><td colspan=\"2\">.624 .742 .616</td></tr><tr><td>Artetxe et al. (2017)</td><td>25</td><td colspan=\"2\">.626 .749 .612</td></tr><tr><td/><td colspan=\"3\">num .628 .739 .604</td></tr><tr><td/><td>5k</td><td colspan=\"2\">.621 .733 .618</td></tr><tr><td>Ours (1:1)</td><td>25</td><td colspan=\"2\">.621 .740 .617</td></tr><tr><td/><td colspan=\"3\">num .624 .743 .617</td></tr><tr><td/><td>5k</td><td colspan=\"2\">.623 .741 .609</td></tr><tr><td>Ours (1:1, rank constr.)</td><td>25</td><td colspan=\"2\">.622 .753 .609</td></tr><tr><td/><td colspan=\"3\">num .625 .755 .611</td></tr><tr><td colspan=\"4\">Table 2: Spearman correlations on English-Italian and</td></tr><tr><td colspan=\"4\">English-German cross-lingual word similarity datasets.</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "Hubs in English-German cross-lingual embedding space with degree of hubness. Non-name tokens are translated.",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "Bilingual dictionary induction results for English-{Turkish, Bengali, Hindi} and Estonian-Finnish.",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF7": {
"text": "Example translations for German-English.",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}