| { |
| "paper_id": "Q18-1014", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:10:48.143704Z" |
| }, |
| "title": "Unsupervised Word Mapping Using Structural Similarities in Monolingual Embeddings", |
| "authors": [ |
| { |
| "first": "Hanan", |
| "middle": [], |
| "last": "Aldarmaki", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The George Washington University", |
| "location": {} |
| }, |
| "email": "aldarmaki@gwu.edu" |
| }, |
| { |
| "first": "Mahesh", |
| "middle": [], |
| "last": "Mohan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The George Washington University", |
| "location": {} |
| }, |
| "email": "mahesh_mohan@gwu.edu" |
| }, |
| { |
| "first": "Mona", |
| "middle": [], |
| "last": "Diab", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The George Washington University", |
| "location": {} |
| }, |
| "email": "mtdiab@gwu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Most existing methods for automatic bilingual dictionary induction rely on prior alignments between the source and target languages, such as parallel corpora or seed dictionaries. For many language pairs, such supervised alignments are not readily available. We propose an unsupervised approach for learning a bilingual dictionary for a pair of languages given their independently-learned monolingual word embeddings. The proposed method exploits local and global structures in monolingual vector spaces to align them such that similar words are mapped to each other. We show empirically that the performance of bilingual correspondents that are learned using our proposed unsupervised method is comparable to that of using supervised bilingual correspondents from a seed dictionary.", |
| "pdf_parse": { |
| "paper_id": "Q18-1014", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Most existing methods for automatic bilingual dictionary induction rely on prior alignments between the source and target languages, such as parallel corpora or seed dictionaries. For many language pairs, such supervised alignments are not readily available. We propose an unsupervised approach for learning a bilingual dictionary for a pair of languages given their independently-learned monolingual word embeddings. The proposed method exploits local and global structures in monolingual vector spaces to align them such that similar words are mapped to each other. We show empirically that the performance of bilingual correspondents that are learned using our proposed unsupervised method is comparable to that of using supervised bilingual correspondents from a seed dictionary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The working hypothesis in distributional semantics is that the meaning of a word can be inferred by its distribution, or co-occurrence, around other words. The validity of this hypothesis is most evident in the performance of distributed vector representations of words, i.e word embeddings, that are automatically induced from large text corpora (Bengio et al., 2003; Mikolov et al., 2013b) . The qualitative nature of these embeddings can be demonstrated through empirical evidence of regularities that reflect certain semantic relationships. Words in the vector space are generally clustered by meaning, and the distances between words and clusters reflect semantic or syntactic relationships, which makes it possible to perform arithmetic on word vectors for analogical reasoning and semantic composition (Mikolov et al., 2013b) . For example, in a vector space V where f = V (''f rance\"), p = V (''paris\"), and g = V (''germany\"), the distance f \u2212 p reflects the country-capital relationship, and g + f \u2212 p results in a vector closest to V (''berlin\"). Named entities and inflectional morphemes are particularly amenable to vector arithmetic, while derivational morphology, polysemy, and other nuanced semantic categories result in lower performance in analogy questions (Finley et al., 2017) .", |
| "cite_spans": [ |
| { |
| "start": 347, |
| "end": 368, |
| "text": "(Bengio et al., 2003;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 369, |
| "end": 391, |
| "text": "Mikolov et al., 2013b)", |
| "ref_id": null |
| }, |
| { |
| "start": 809, |
| "end": 832, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": null |
| }, |
| { |
| "start": 1276, |
| "end": 1297, |
| "text": "(Finley et al., 2017)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The extent of these semantic and syntactic regularities is difficult to assess intrinsically, and the performance in analogical reasoning can be partially attributed to the clustering of the words in question (Linzen, 2016) . If meaning is encoded in the relative distances among word vectors, then the structure within vector spaces should be consistent across different languages given that the datasets used to build them express similar content. In Rapp (1995) , a simulation study showed that similarity in word co-occurrence patterns within unrelated German and English texts is correlated with the number of corresponding word positions in the monolingual cooccurrence matrices. More recently, Mikolov et al. (2013a) showed that a linear projection can be learned to transform word embeddings from one language into the vector space of another using a medium-size seed dictionary, which demonstrates that the multilingual vector spaces are at least related by a linear transform. This makes it possible to align word embeddings of different languages in order to be directly comparable within the same seman-tic space. Such cross-lingual word embeddings can be used to expand dictionaries or to learn languageindependent classifiers.", |
| "cite_spans": [ |
| { |
| "start": 209, |
| "end": 223, |
| "text": "(Linzen, 2016)", |
| "ref_id": null |
| }, |
| { |
| "start": 453, |
| "end": 464, |
| "text": "Rapp (1995)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 701, |
| "end": 723, |
| "text": "Mikolov et al. (2013a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A number of methods have been proposed recently for learning cross-lingual word embeddings with various degrees of supervision, ranging from word-level alignment using bilingual dictionaries (Ammar et al., 2016) , sentence-level alignment using parallel corpora (Gouws et al., 2015; Klementiev et al., 2012) , or document alignment using crosslingual topic models (Vuli\u0107 and Moens, 2015; Vuli\u0107 and Moens, 2012) . Using such alignments, especially large parallel corpora or sizable dictionaries, high-quality bilingual embeddings can be obtained (Upadhyay et al., 2016) . In addition, a number of methods have been proposed for expanding dictionaries using a small initial dictionary with as few as a hundred entries (Haghighi et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 191, |
| "end": 211, |
| "text": "(Ammar et al., 2016)", |
| "ref_id": null |
| }, |
| { |
| "start": 262, |
| "end": 282, |
| "text": "(Gouws et al., 2015;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 283, |
| "end": 307, |
| "text": "Klementiev et al., 2012)", |
| "ref_id": null |
| }, |
| { |
| "start": 364, |
| "end": 387, |
| "text": "(Vuli\u0107 and Moens, 2015;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 388, |
| "end": 410, |
| "text": "Vuli\u0107 and Moens, 2012)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 545, |
| "end": 568, |
| "text": "(Upadhyay et al., 2016)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 716, |
| "end": 739, |
| "text": "(Haghighi et al., 2008)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, such alignments are not available for all languages and dialects, and while a small dictionary might be feasible to acquire, discovering word mappings with no prior knowledge whatsoever is valuable. Intuitively, if the monolingual corpora express similar aspects of the world, there should be enough structure within the vector space of each language to recover the mappings in a completely unsupervised manner. In this paper, we propose a novel approach for learning a transformation between monolingual word embeddings without the use of prior alignments. We show empirically that we can recover mappings with high accuracy in two language pairs: a close language pair, French-English; and a distant language pair, Arabic-English. The proposed method relies on the consistent regularities within monolingual vector spaces of different languages. We extract initial mappings using spectral embeddings that encode the local geometry around each word, and we use these tentative pairs to seed a greedy algorithm which minimizes the differences in global pair-wise distances among word vectors. The retrieved mappings are then used to fit a linear projection matrix to transform word embeddings from the source to the target language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Few models have been proposed for extracting dictionaries or learning bilingual embeddings without the use of any prior alignment. For languages that share orthographic similarities, lexical features such as the normalized edit distance between source and target words can be used to extract a seed lexicon for bootstrapping the bilingual dictionary induction process (Hauer et al., 2017) . In (Diab and Finch, 2000) , unsupervised mappings were extracted by preserving pairwise distances between word co-occurrence representations from two comparable corpora. The model was only evaluated mono-lingually, where two sections of a corpus were used for collecting co-occurrence statistics separately, and an iterative training algorithm was then used to retrieve the mapping of English words to themselves. Only punctuation marks were used to seed the learning and high accuracy results were reported. However, the method was not evaluated cross-lingually. We observed experimentally that punctuation marks-and function words in general-are insufficient to map words cross-lingually since they have different distributional profiles in different languages due to their predominant syntactic role.", |
| "cite_spans": [ |
| { |
| "start": 368, |
| "end": 388, |
| "text": "(Hauer et al., 2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 394, |
| "end": 416, |
| "text": "(Diab and Finch, 2000)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Another unsupervised approach has been recently proposed using adversarial autoencoders (Barone, 2016) where a transformation is learned without a seed by matching the distribution of the source word embeddings with the target distribution. Preliminary investigation showed some correct mappings but the results were not comparable to supervised methods. Recent efforts using carefully-tuned adversarial methods report encouraging results comparable to supervised methods (Zhang et al., 2017; Conneau et al., 2017) . In Kiela et al. (2015) , bilingual lexicon induction is achieved by matching visual features extracted from images that correspond to each word using a convolutional neural network. The imagebased approach performs particularly well for words that express concrete rather than abstract concepts, and provides a convenient alternative to linguistic supervision when corresponding images are available.", |
| "cite_spans": [ |
| { |
| "start": 472, |
| "end": 492, |
| "text": "(Zhang et al., 2017;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 493, |
| "end": 514, |
| "text": "Conneau et al., 2017)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 520, |
| "end": 539, |
| "text": "Kiela et al. (2015)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "The unsupervised mapping problem arises in other contexts where an optimal alignment between two isomorphic point sets is sought. In image registration and shape recognition, various efficient methods can be used to find an optimal alignment between two sets of low-dimensional points that correspond to images with various degrees of deformation (Myronenko and Song, 2010; Chi et al., 2008) . In manifold learning, two sets of related high-dimensional points are projected into a shared lower dimensional space where the points can be compared and mapped to one other, such as the alignment of isomorphic protein structures (Wang and Mahadevan, 2009) and cross-lingual document alignment with unsupervised topic models (Diaz and Metzler, 2007; Wang and Mahadevan, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 347, |
| "end": 373, |
| "text": "(Myronenko and Song, 2010;", |
| "ref_id": null |
| }, |
| { |
| "start": 374, |
| "end": 391, |
| "text": "Chi et al., 2008)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 625, |
| "end": 651, |
| "text": "(Wang and Mahadevan, 2009)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 720, |
| "end": 744, |
| "text": "(Diaz and Metzler, 2007;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 745, |
| "end": 770, |
| "text": "Wang and Mahadevan, 2008)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "In the skip-gram model presented in Mikolov et al. (2013b), a feed-forward neural network is trained to maximize the probability of all words within a fixed window around a given word. Formally, given a word w in a vocabulary W , the objective of the skip-gram model is to maximize the following loglikelihood:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background 2.1 Skip-gram Word Embeddings with Subword Features", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2211 c\u2208Cw log p(c|w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background 2.1 Skip-gram Word Embeddings with Subword Features", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where C w is the set of words in the context of w. The words are represented as one-hot vectors of size |W | that are projected into dense vectors of size d. Over a large corpus, the d-dimensional word projections encode semantic and syntactic features that are not only useful for maximizing the above probability, but also serve as general-purpose representations for words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background 2.1 Skip-gram Word Embeddings with Subword Features", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In Bojanowski et al. (2017) , a word vector is represented as the sum of its character n-grams which helps account for inflectional variations within a language, especially for morphologically rich languages where less frequent inflections are less likely to have good representations using only word-level features. Using n-grams helps account for lexical similarities among words within the same language; independently-learned embeddings with no explicit alignment would still have unrelated n-gram representations even if the languages share lexical similarities. We will refer to this model as the subword skip-gram.", |
| "cite_spans": [ |
| { |
| "start": 3, |
| "end": 27, |
| "text": "Bojanowski et al. (2017)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background 2.1 Skip-gram Word Embeddings with Subword Features", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given word embeddings in two languages X and Y , and a dictionary of (source, target) word pairs with embeddings x s and y t , respectively, a transformation matrix T , such that y t = T x s , can be estimated with various degrees of accuracy (Mikolov et al., 2013a) . Large, accurate dictionaries result in better transformations, but a good fit can also be obtained using a few thousand word pairs even in the presence of noise (see Section 4.3.4 for an empirical demonstration). Formally, given a dictionary of n word pairs,", |
| "cite_spans": [ |
| { |
| "start": 243, |
| "end": 266, |
| "text": "(Mikolov et al., 2013a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear Transformation of Word Embeddings", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "(x i , M (x i ))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear Transformation of Word Embeddings", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": ", where i = 1, ..., n, and M is a mapping from X to Y , the linear transformation matrixT is learned by minimizing the following cost function", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear Transformation of Word Embeddings", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "T = arg min T n \u2211 i=1 \u2225 T x i \u2212 M (x i ) \u2225 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear Transformation of Word Embeddings", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "( 1)After learningT , the translation of new source words can be retrieved by transforming the word vector first, then finding its nearest neighbor in the target vocabulary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear Transformation of Word Embeddings", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Learning an accurate transformation between word embeddings as described in Section 2.2 requires a seed dictionary of reasonable size. We propose a method that bypasses this requirement by learning to align the monolingual embeddings in an unsupervised manner. The underlying assumption is that word embeddings across different languages share similar local and global structures that characterize language-independent semantic features. For example, the distance between the words monday and week in English should be relatively similar to the distance between lundi and semaine in French. We attempt to recover the correspondences between different languages by exploiting these structural similarities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Word Mapping", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our approach consists of two main steps. In the first step (Section 3.1), we extract initial mappings using spectral features that encode the geometry of the local neighborhood around a point in the vector space. In the second step (Section 3.2), we iteratively refine the correspondences using a greedy optimization algorithm, which we refer to as Iterative Mapping (IM). IM is a variation on the word mapping model in Diab and Finch (2000) . The model does not make language-specific assumptions, making it suitable for learning cross-lingual correspondences. We then use these correspondences to learn a linear transformation between the source and target embeddings, as described in Section 2.2.", |
| "cite_spans": [ |
| { |
| "start": 420, |
| "end": 441, |
| "text": "Diab and Finch (2000)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Word Mapping", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To analyze local structures in monolingual vector spaces, we treat each word embedding as a point in a high-dimensional space and further embed each point into a local invariant feature space, as proposed in Chi et al. (2008) for affine registration of image point sets. The local invariant features are produced through eigendecomposition of the k-nearestneighbor (knn) graph for each point in the vector space as described below.", |
| "cite_spans": [ |
| { |
| "start": 208, |
| "end": 225, |
| "text": "Chi et al. (2008)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Initial Correspondences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For a word embedding w, we construct its knn adjacency graph, A w , such that A w is a k \u00d7 k matrix that contains the pair-wise similarities among w's knearest neighbors, including w itself. To embed the adjacency matrix in a permutation-invariant space, w is mapped to a feature vector v w that contains the sorted eigenvalues of L w , which is defined as,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Initial Correspondences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "L w = I k \u2212 \uf8ee \uf8ef \uf8f0 f (d 11 ) . . . f (d 1k ) . . . . . . . . . f (d k1 ) . . . f (d kk ) \uf8f9 \uf8fa \uf8fb where f (d ij ) = exp(\u2212d 2 ij /2\u03c3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Initial Correspondences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "2 ) is the Gaussian similarity function and d ij is the Euclidean distance between points i and j. We will refer to the vectors of sorted eigenvalues as spectral embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Initial Correspondences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "After extracting these local features for all points in X and Y , each point p in X is mapped to its nearest neighbor q in Y using the Euclidean distance between their spectral embeddings. To minimize the spurious effect of hubs-points that tend to be nearest neighbors to a large number of other points (Radovanovi\u0107 et al., 2010)-we only include the correspondences where the neighborhood is symmetric; that is, if p and q are each other's nearest neighbor in the local spectral feature space.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Initial Correspondences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The spectral embeddings are k-dimensional representations of the original word embeddings that encode the local knn structure around each word. Since a linear transformation preserves the distances between all points, the spectral embeddings allow us to map each source word to a target word with a similar knn structure. The parameter k offers a simple way to adjust the amount of contextual information used in building the spectral embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Initial Correspondences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "After extracting initial correspondences using spectral features, we iteratively update the mapping to preserve the global pair-wise distances using the iterative mapping (IM) algorithm. The objective of IM is to preserve the relative distances among the source words in the mapped space, which is achieved by locally minimizing a global loss function in iterations until convergence. Note that the spectral embeddings described in Section 3.1 are only used to extract tentative pairs for initialization. Since the spectral embeddings only capture local features, the rest of the algorithm uses the original word embeddings to preserve global distances among source words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Global Correspondences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Given a set of n monolingual embeddings X for the source language, and a set of m monolingual embeddings Y for the target language, we use the residual sum of squares loss function defined below to optimize the mapping M from X to Y :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Global Correspondences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "L = \u2211 p,q ( D X ( x p , x q ) \u2212 D Y ( M (x p ), M (x q ) ) ) 2 (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Global Correspondences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where D X and D Y are the pairwise Euclidean distances for X and Y , respectively, and p = 1, ..., n, q = 1, .., n span the indices in X. We seed the learning using the correspondences obtained by the spectral initialization method. The remaining words are mapped to a virtual token with a distance c from all other words, including itself, where c > 0 is a tunable parameter. The optimization is then carried out in a greedy manner: a source word, x i , is selected at random, and M (x i ) is selected to be the word in Y that minimizes the loss function L. This greedy algorithm yields a locally optimal mapping at each step and the final result depends on the initialization. The IM method is summarized in Algorithm 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Global Correspondences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "After optimizing the global distances using IM, we use the (source, target) pairs in M to learn a linear transformation between X and Y as described in Section 2.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Global Correspondences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "input : Word embeddings X and Y output: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Global Correspondences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Mapping M from X to Y M \u2190 spectral_initialization(X, Y ) C \u2190 cost_of _mapping(M, X, Y ) repeat Sample a word x \u2208 X for y \u2208 Y do M y \u2190 M M y (x) = y C y \u2190 cost_of _mapping(M y , X, Y ) if C y < C then M \u2190 M y C \u2190 C", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Global Correspondences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We experimented with two language pairs: French-English, and Arabic-English. French shares similar orthography and word roots with English, but for evaluating the generality of the approach, we don't utilize these similarities in any form. 1 Arabic, on the other hand, is a dissimilar language with more limited resources, and it is noisier at the word level due to clitic affixation that is challenging to tokenize. This makes it a suitable test-case for a realistic lowresource language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We extracted various datasets with different levels of similarity to test the proposed unsupervised word mapping approach. We used the following data sources:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "WMT'14 the Workshop on Machine Translation French-English corpus (Bojar et al., 2014) . This is a parallel corpus, but we don't use the sentence alignments. 1 The word embeddings are learned independently for each language; representations of subword units are not shared across languages, so morphological variations are only accounted for mono-lingually.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 85, |
| "text": "(Bojar et al., 2014)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 157, |
| "end": 158, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Source Target fr-en-p French WMT'14 English WMT'14 fr-en-s French AFP English APW fr-en-d French APW 200x English APW 199x ar-en-p Arabic UN English UN ar-en-s Arabic AFP English APW AFP Agence France Presse corpora from Gigaword datasets for English (Parker et al., 2011b ), French (Mendon\u00e7a et al., 2009 , and Arabic (Parker et al., 2011a) .", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 280, |
| "text": "(Parker et al., 2011b", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 281, |
| "end": 313, |
| "text": "), French (Mendon\u00e7a et al., 2009", |
| "ref_id": null |
| }, |
| { |
| "start": 327, |
| "end": 349, |
| "text": "(Parker et al., 2011a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 7, |
| "end": 138, |
| "text": "Target fr-en-p French WMT'14 English WMT'14 fr-en-s French AFP English APW fr-en-d French APW 200x English APW 199x ar-en-p", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Label", |
| "sec_num": null |
| }, |
| { |
| "text": "APW The Associated Press corpora from Gigaword datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Label", |
| "sec_num": null |
| }, |
| { |
| "text": "UN Parallel Arabic-English corpus from UN proceedings (Ma, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 64, |
| "text": "(Ma, 2004)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Label", |
| "sec_num": null |
| }, |
| { |
| "text": "We randomly extracted 5M sentences from each corpus to create the datasets in Table 1 , which are either parallel (suffix:p), similar (suffix:s), or dissimilar (suffix:d). All datasets are within-genre to ensure that they share a common vocabulary. We tokenized the English and French datasets using the CoreNLP toolkit (Manning et al., 2014) . We also converted all characters to lower case and normalized numeric sequences to a single token. Arabic text was tokenized using the Madamira toolkit (Pasha et al., 2014) . We used the D3 tokenization scheme, and we further processed the data by separating punctuation and normalizing digits. Note that Arabic tokenization is non-deterministic due to clitic affixation, so the processed datasets still contained untokenized phrases.", |
| "cite_spans": [ |
| { |
| "start": 320, |
| "end": 342, |
| "text": "(Manning et al., 2014)", |
| "ref_id": null |
| }, |
| { |
| "start": 497, |
| "end": 517, |
| "text": "(Pasha et al., 2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 78, |
| "end": 85, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Label", |
| "sec_num": null |
| }, |
| { |
| "text": "For each of the datasets described above, we generated 100-dimensional word embeddings using the subword skip-gram model (Bojanowski et al., 2017) . We extracted the most frequent 2K words from the source and target languages and their embeddings for the iterative mapping (IM) method. The loss function L in equation 2 was used to guide the tuning of model parameters. We tuned k = [10, 20, 30, 40, 50] for the spectral initialization, and due to randomness in IM, we repeated each experiment 10 times and used the mapping that resulted in the smallest loss. For the final linear transformation T , we used the most frequent 50K words in both source and target languages, and we used the hubness reduction method described in Dinu et al. (2015) with c=5000.", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 146, |
| "text": "(Bojanowski et al., 2017)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 727, |
| "end": 745, |
| "text": "Dinu et al. (2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "4.2" |
| }, |
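Because IM involves randomness, the set-up above keeps the run with the smallest loss out of 10 restarts. A minimal sketch of that selection loop, with a hypothetical `run_im` standing in for the actual algorithm and the loss of equation 2:

```python
import random

def run_im(seed: int):
    """Stand-in for one iterative-mapping run: returns (mapping, loss)."""
    rng = random.Random(seed)
    mapping = {"chat": "cat"}      # hypothetical correspondence set
    loss = rng.uniform(0.0, 1.0)   # stand-in for the loss L of equation 2
    return mapping, loss

def best_of(n_restarts: int = 10):
    """Repeat the experiment and keep the mapping with the smallest loss."""
    runs = [run_im(seed) for seed in range(n_restarts)]
    return min(runs, key=lambda r: r[1])
```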
| { |
| "text": "We extracted dictionary pairs from the Multilingual WordNet (Miller, 1995; Sagot and Fi\u0161er, 2008; Elkateb et al., 2006; Abouenour et al., 2013) where the source words are within the top 15K words in all datasets. From these pairs, we extracted a random sample of 2K unique (source, target) pairs for training the supervised method, and the remaining source words and all their translations were used for testing. This resulted in a total of 977 French words and 473 Arabic words for evaluation.", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 74, |
| "text": "(Miller, 1995;", |
| "ref_id": null |
| }, |
| { |
| "start": 75, |
| "end": 97, |
| "text": "Sagot and Fi\u0161er, 2008;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 98, |
| "end": 119, |
| "text": "Elkateb et al., 2006;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 120, |
| "end": 143, |
| "text": "Abouenour et al., 2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Set-up", |
| "sec_num": "4.2" |
| }, |
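The train/test split described here can be sketched as below. `split_dictionary` is a hypothetical helper; details such as which translation is kept for each sampled training source are assumptions, not the authors' procedure.

```python
import random

def split_dictionary(pairs, n_train=2000, seed=0):
    """Sample unique-source pairs for training; the remaining source
    words keep all their translations for testing."""
    by_source = {}
    for src, tgt in pairs:
        by_source.setdefault(src, []).append(tgt)
    rng = random.Random(seed)
    sources = sorted(by_source)
    train_sources = set(rng.sample(sources, min(n_train, len(sources))))
    # one (source, target) pair per sampled training source (an assumption)
    train = [(s, by_source[s][0]) for s in sorted(train_sources)]
    # every translation of each held-out source goes to the test set
    test = [(s, t) for s in sources if s not in train_sources
            for t in by_source[s]]
    return train, test
```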
| { |
| "text": "The unsupervised word mapping method proposed in this paper consists of three parts: given a subset of source and target words with a viable mapping, we extract tentative correspondences using spectral features as in Section 3.1. These initial pairs are used to seed the IM algorithm to refine the mapping as described in Section 3.2. The final correspondences obtained by the IM algorithm are then used as a seed dictionary to fit a linear transformation matrix between the source and target embeddings. The linear transformation step serves as a smooth generalization of the mapping since it preserves the structure of the source embeddings and can be used to extract translations of additional word pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In order to extract a mapping between two sets of points, we first need to ensure that a viable mapping between the two sets exists. In an unsupervised setting, we can analyze the word frequencies within the monolingual corpora; it is reasonable to assume that certain words would have high frequencies in multilingual datasets that cover similar topics. Word frequencies follow a consistent power-law distribution that is at least partially determined by meaning (Piantadosi, 2014). Using a set of 200 fundamental words, Calude and Pagel (2011) reported a high correlation between word frequency ranks across 17 languages drawn from six language families. We analyzed the consistency of word frequencies in the French-English dataset fr-en-s using all WordNet translation pairs where source words fall within certain frequency bands. For example, given all French words in WordNet that fall within the 1K most frequent words, we report the fraction of these words that have a translation within the 1K most frequent words in English. Among the top 10K source words, we have a total of 4,653 words with WordNet translations, almost equally distributed among the ten frequency bands.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Frequency Analysis", |
| "sec_num": "4.3.1" |
| }, |
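The band-overlap statistic reported here (the fraction of top-band source words with a translation inside the same target band) can be computed as below; the function name and data layout are illustrative assumptions.

```python
def band_overlap(src_ranks, tgt_ranks, translations, band=1000):
    """Fraction of source words ranked within the top `band` whose
    translations include a target word also ranked within the top `band`.

    src_ranks/tgt_ranks map word -> 0-based frequency rank;
    translations maps a source word to its list of translations."""
    hits, total = 0, 0
    for word, rank in src_ranks.items():
        if rank >= band or word not in translations:
            continue
        total += 1
        # a missing target word defaults to an out-of-band rank
        if any(tgt_ranks.get(t, band) < band for t in translations[word]):
            hits += 1
    return hits / total if total else 0.0
```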
| { |
| "text": "As shown in Figure 1 , at least 80% of the most frequent 1K French words have a translation within the same frequency band. Smaller overlap is observed for lower frequencies, where only about a quarter of the words have a translation within the same frequency band. This both confirms previous findings about the correlation of frequency ranks across different languages and indicates that the correlation itself depends on word frequency. Note also that frequency ranks for the least frequent words are rather meaningless since most words in any finite dataset are likely to occur only once. Therefore, we carry out our analysis and mapping using only the top 2K source and target words to improve the chances of having a feasible mapping between the two point sets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 20, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Frequency Analysis", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "To extract initial correspondences, we assume that similar words have similar knn graphs. Figure 2 shows colormap visualizations of knn adjacency matrices of various source and target words in fr-en-s, where red represents higher similarity scores close to 1. Most words have color distributions in their neighborhood graphs similar to those of their translations, although most of them are not sufficiently distinct from other words, which is expected given the low dimensionality of the spectral space. Note also that most verbs have dense adjacency graphs due to variations in conjugation that tend to be clustered densely in the vector space. Ambiguous verbs like hold have dissimilar local structures, which reflects their inconsistent usage across the two languages. Nouns, on the other hand, tend to have sparser and more distinct local structures. One exception here is monday, whose closest neighbors are other days of the week with very similar representations, which results in a dense but consistent structure. Figure 3 shows two-dimensional projections of the original word embeddings and their corresponding spectral embeddings. Note that most words moved closer to their correct translations in the spectral space, where words with similar adjacency graphs are clustered in the same regions. Table 2 shows a sample of initial correspondences extracted using spectral features for IM initialization. As expected, most word pairs are incorrectly mapped but semantically related to the target translation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 90, |
| "end": 98, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 1023, |
| "end": 1031, |
| "text": "Figure 3", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 1303, |
| "end": 1310, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Nearest Neighbor Structures", |
| "sec_num": "4.3.2" |
| }, |
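The kNN adjacency matrices visualized in Figure 2 can be reconstructed roughly as below. Cosine similarity and the exact neighborhood definition are assumptions about details the text leaves open, so this is a sketch rather than the authors' implementation.

```python
import numpy as np

def knn_adjacency(emb: np.ndarray, word_idx: int, k: int = 5) -> np.ndarray:
    """k x k matrix of pairwise cosine similarities among the k nearest
    neighbors of one word (the word itself counts as its own neighbor)."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ unit[word_idx]        # cosine similarity to every word
    neighbors = np.argsort(-sims)[:k]   # indices of the k most similar words
    return unit[neighbors] @ unit[neighbors].T
```

Plotting such a matrix as a colormap for a word and for its translation gives the side-by-side comparisons described in the text.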
| { |
| "text": "To verify the consistency of global distances, we randomly extracted a set of 100 WordNet pairs that lie within the most frequent 2K words in fr-en-s, and we divided the set into two sets of 50 words each and calculated the pairwise Euclidean distances among the English words ( Figure 4a ) and among the corresponding French words (Figure 4b ). For comparison, we extracted an additional random set of French words and calculated the Euclidean distances among them (Figure 4c ). As shown, the colormaps of corresponding English and French words are relatively similar compared to random words, which indicates that global pairwise distances also reflect consistent language-independent features. Tables 3 and 4 show a subset of word mappings retrieved using IM with spectral initialization on the various datasets. Recall that the objective of IM is to preserve the global pairwise distances of the source words in the mapped space. Most IM mappings are either correct or related to the target translation; for example, the French word for February is mapped to September or January, which are nearest neighbors of the correct word in the target vector space and are semantically related. Using samples of 100 words randomly extracted from each dataset, we estimated the quality of word translations in terms of semantic similarity and relatedness. 2 As seen in Figure 5 , over 60% of translations are semantically related, of which at least 20% are semantically similar. [Figure 2 caption: English (top) and French (bottom) words: (a) \"go\" - \"partir\" (b) \"refused\" - \"refus\u00e9\" (c) \"hold\" - \"tenir\" (d) \"say\" - \"dit\" (e) \"monday\" - \"lundi\" (f) \"office\" - \"agence\" (g) \"china\" - \"chine\" (h) \"university\" - \"universit\u00e9\".]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 485, |
| "end": 494, |
| "text": "Figure 4a", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 538, |
| "end": 548, |
| "text": "(Figure 4b", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 672, |
| "end": 682, |
| "text": "(Figure 4c", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 903, |
| "end": 917, |
| "text": "Tables 3 and 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 1565, |
| "end": 1573, |
| "text": "Figure 5", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Global Distances", |
| "sec_num": "4.3.3" |
| }, |
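The colormaps in Figure 4 compare matrices like the one produced below. This vectorized Euclidean pairwise-distance computation is a standard sketch, not the authors' code.

```python
import numpy as np

def pairwise_distances(X: np.ndarray) -> np.ndarray:
    """Matrix D with D[i, j] = Euclidean distance between rows i and j."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding
```

Computing `pairwise_distances` over a set of English words and over their aligned French translations, then visually comparing the two matrices against one built from random words, reproduces the comparison described above.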
| { |
| "text": "Learning optimal linear transformations between multilingual vector spaces depends on the quality and size of the seed dictionaries, while unsupervised mappings are expected to be noisy. In this section, we evaluate the quality of linear transformations with suboptimal supervision. Figure 6 shows the performance of the transformations learned using dictionaries extracted from WordNet at different sizes and perturbation levels. The performance is reported in terms of precision at k, where k is the number of nearest neighbors in the target vocabulary.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 282, |
| "end": 290, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Linear Transformation", |
| "sec_num": "4.3.4" |
| }, |
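Precision at k, as defined above, can be sketched as follows; the gold standard is assumed to map each test-word index to a set of acceptable target indices, and Euclidean nearest neighbors are an assumption about the retrieval criterion.

```python
import numpy as np

def precision_at_k(mapped_src, tgt_emb, gold, k):
    """Fraction of test words with a correct translation among the k
    nearest target neighbors of the mapped source vector."""
    hits = 0
    for src_idx, correct in gold.items():
        dists = np.linalg.norm(tgt_emb - mapped_src[src_idx], axis=1)
        topk = set(np.argsort(dists)[:k].tolist())
        hits += bool(topk & correct)   # any acceptable translation counts
    return hits / len(gold)
```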
| { |
| "text": "Larger dictionaries result in more accurate transformations, as expected. A thousand or more accurate dictionary pairs are sufficient to learn high-quality transformations, while smaller dictionary sizes result in much lower precision at all k levels. Figure 6b shows the performance using a training dictionary of size 2K perturbed with incorrect mappings. Surprisingly, the precision is reasonably high even when only 50% of the dictionary pairs are correct. This indicates that a bilingual transformation can be learned successfully using a few thousand word pairs even in the presence of noise, so a reasonable number of incorrect mappings can be tolerated.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 251, |
| "end": 261, |
| "text": "Figure 6b", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Linear Transformation", |
| "sec_num": "4.3.4" |
| }, |
| { |
| "text": "Using the (source, target) pairs extracted using IM with spectral initialization (IM-SI), we fit a linear projection matrix from the source to the target embeddings to compare the results with supervised linear transformation. We also compare with a baseline of random initialization of the IM method (IM-Rand). We evaluate the linear transformations on the different datasets in Table 1 by reporting the precision of mapping each test word to a correct translation within its k nearest neighbors, for k \u2208 {1, 5, 10, 20, 50, 100}. The results are shown in Figure 7 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 380, |
| "end": 387, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 556, |
| "end": 564, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5" |
| }, |
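Fitting the linear projection matrix from the seed pairs reduces to a least-squares problem. A minimal sketch under that assumption, omitting the hubness correction mentioned in the experimental set-up:

```python
import numpy as np

def fit_transform(src_emb, tgt_emb, seed_pairs):
    """Least-squares T minimizing sum over seed (i, j) pairs of
    ||src_emb[i] @ T - tgt_emb[j]||^2."""
    X = src_emb[[i for i, _ in seed_pairs]]
    Y = tgt_emb[[j for _, j in seed_pairs]]
    T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return T
```

A test word is then translated by projecting its source vector with `T` and searching the target vocabulary for nearest neighbors.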
| { |
| "text": "While the initial spectral embeddings did not always recover the correct correspondences (see Table 2 ), these tentative pairs helped initialize the IM algorithm in the right direction for better global convergence. As shown in Figure 7 , spectral initialization led to more accurate mappings than random initialization. In fact, the use of spectral initialization in combination with IM to seed the transformation resulted in a precision close to the supervised baseline, as seen in Figures 7a and 7b . Figure 8 shows the performance of transforming Arabic word embeddings using the various models. The supervised baseline results are lower than in the French-English case, which is partly due to the low coverage of WordNet translations for Arabic (see Table 5 ). Nevertheless, we managed to recover accurate mappings and linear transformations that perform comparably to the supervised baseline. Table 5 shows some examples of correct and incorrect transformations at k = 5 on Arabic test words. Observe that even in the case of incorrect matches, the k nearest neighbors are related to the target words in meaning. For example, all five nearest neighbors of the word '\u0628\u0646\u0627\u064a\u0629' ('building') are building-related, such as 'tower', 'parking', 'three-story', and 'mall'.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 93, |
| "end": 101, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 228, |
| "end": 236, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 409, |
| "end": 426, |
| "text": "Figures 7a and 7b", |
| "ref_id": null |
| }, |
| { |
| "start": 429, |
| "end": 437, |
| "text": "Figure 8", |
| "ref_id": null |
| }, |
| { |
| "start": 677, |
| "end": 684, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 821, |
| "end": 828, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We proposed an unsupervised approach for learning linear transformations between word embeddings of different languages without the use of seed dictionaries or any prior bilingual alignment. The proposed method exploits various features and structures in monolingual vector spaces, namely word frequencies, local neighborhood structures, and global pairwise distances, assuming that these structures are sufficiently consistent across languages. We verified experimentally that, given comparable multilingual corpora, accurate transformations across languages can be retrieved using only their monolingual word embeddings for clues.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Words are considered semantically similar if they are synonymous or identical in meaning regardless of syntactic category; for example {happy, glad, happiness}. Semantically related words are somewhat related in meaning but not necessarily synonymous, such as {food, fruit, restaurant}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank our action editor Sebastian Pad\u00f3 and anonymous reviewers for their helpful suggestions that significantly improved the quality of this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| }, |
| { |
| "text": "Lahsen Abouenour, Karim Bouzoubaa, and Paolo Rosso.2013. On the evaluation and improvement of Arabic ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "References", |
| "sec_num": null |
| }, |
| { |
| "text": "'requirement', 'prerequisite', 'demand' 'principled', 'onus', 'insists', 'insisting', 'rests' 'insistence', 'objection', 'demands', 'refusal', 'deference' '\u0628\u0646\u0627\u064a\u0629' 'building', 'edifice' '<num>-bed', 'floors', 'tower', 'dormitory', 'playground' 'parking', 'three-story', 'six-story', 'mall', 'five-story'", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A neural probabilistic language model", |
| "authors": [ |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9jean", |
| "middle": [], |
| "last": "Ducharme", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Vincent", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Jauvin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "", |
| "issue": "", |
| "pages": "1137--1155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Research, pages 1137-1155.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Enriching word vectors with subword information", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "135--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association of Computational Linguistics, pages 135-146.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Findings of the 2014 Workshop on Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Ondrej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Buck", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Federmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Leveling", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Pecina", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| }, |
| { |
| "first": "Herve", |
| "middle": [], |
| "last": "Saint-Amand", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "12--58", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint- Amand, et al. 2014. Findings of the 2014 Workshop on Statistical Machine Translation. In Proceedings of the Ninth Workshop on Statistical Machine Transla- tion, pages 12-58.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "How do we use language? Shared patterns in the frequency of word use across 17 world languages", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Andreea", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Calude", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pagel", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Philosophical Transactions of the Royal Society of London B: Biological Sciences", |
| "volume": "", |
| "issue": "", |
| "pages": "1101--1107", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreea S. Calude and Mark Pagel. 2011. How do we use language? Shared patterns in the frequency of word use across 17 world languages. Philosophical Transactions of the Royal Society of London B: Bio- logical Sciences, pages 1101-1107.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Higher dimensional affine registration and vision applications", |
| "authors": [ |
| { |
| "first": "Yu-Tseh", |
| "middle": [], |
| "last": "Chi", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "M" |
| ], |
| "last": "Shahed", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Ho", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Hsuan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "European Conference on Computer Vision", |
| "volume": "", |
| "issue": "", |
| "pages": "256--269", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yu-Tseh Chi, S.M. Nejhum Shahed, Jeffrey Ho, and Ming-Hsuan Yang. 2008. Higher dimensional affine registration and vision applications. In European Con- ference on Computer Vision, pages 256-269.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Word translation without parallel data", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc'aurelio", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "Ludovic", |
| "middle": [], |
| "last": "Denoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Herv\u00e9", |
| "middle": [], |
| "last": "J\u00e9gou", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1710.04087v3" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ran- zato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087 v3.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A statistical wordlevel translation model for comparable corpora", |
| "authors": [ |
| { |
| "first": "Mona", |
| "middle": [ |
| "T" |
| ], |
| "last": "Diab", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Finch", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Content-Based Multimedia Information Access", |
| "volume": "", |
| "issue": "", |
| "pages": "1500--1508", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mona T. Diab and Steve Finch. 2000. A statistical word- level translation model for comparable corpora. In Content-Based Multimedia Information Access, pages 1500-1508.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Pseudoaligned multilingual corpora", |
| "authors": [ |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Diaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Donald", |
| "middle": [], |
| "last": "Metzler", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "2727--2732", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fernando Diaz and Donald Metzler. 2007. Pseudo- aligned multilingual corpora. In International Joint Conference on Artificial Intelligence, pages 2727- 2732.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Improving zero-shot learning by mitigating the hubness problem", |
| "authors": [ |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Angeliki", |
| "middle": [], |
| "last": "Lazaridou", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of International Conference on Learning Representations Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of International Conference on Learning Representations Workshop.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Building a WordNet for Arabic", |
| "authors": [ |
| { |
| "first": "Sabri", |
| "middle": [], |
| "last": "Elkateb", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Black", |
| "suffix": "" |
| }, |
| { |
| "first": "Horacio", |
| "middle": [], |
| "last": "Rodr\u00edguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Musa", |
| "middle": [], |
| "last": "Alkhalifa", |
| "suffix": "" |
| }, |
| { |
| "first": "Piek", |
| "middle": [], |
| "last": "Vossen", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Pease", |
| "suffix": "" |
| }, |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of The International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "22--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sabri Elkateb, William Black, Horacio Rodr\u00edguez, Musa Alkhalifa, Piek Vossen, Adam Pease, and Christiane Fellbaum. 2006. Building a WordNet for Arabic. In Proceedings of The International Conference on Lan- guage Resources and Evaluation, pages 22-28.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "What analogies reveal about word vectors and their compositionality", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Finley", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Farmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Serguei", |
| "middle": [], |
| "last": "Pakhomov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 6th Joint Conference on Lexical and Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gregory Finley, Stephanie Farmer, and Serguei Pakho- mov. 2017. What analogies reveal about word vectors and their compositionality. In Proceedings of the 6th Joint Conference on Lexical and Computational Se- mantics, pages 1-11.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "BilBOWA: Fast bilingual distributed representations without word alignments", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Gouws", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed represen- tations without word alignments. In Proceedings of the", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "English Gigaword Fifth Edition (LDC2011T07)", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Parker", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Graff", |
| "suffix": "" |
| }, |
| { |
| "first": "Junbo", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Ke", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuaki", |
| "middle": [], |
| "last": "Maeda", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011b. English Gigaword Fifth Edi- tion (LDC2011T07). Linguistic Data Consortium.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "MADAMIRA: A fast, comprehensive tool for morphological analysis and disambiguation of Arabic", |
| "authors": [ |
| { |
| "first": "Arfath", |
| "middle": [], |
| "last": "Pasha", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohamed", |
| "middle": [], |
| "last": "Al-Badrashiny", |
| "suffix": "" |
| }, |
| { |
| "first": "Mona", |
| "middle": [ |
| "T" |
| ], |
| "last": "Diab", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmed", |
| "middle": [ |
| "El" |
| ], |
| "last": "Kholy", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramy", |
| "middle": [], |
| "last": "Eskander", |
| "suffix": "" |
| }, |
| { |
| "first": "Nizar", |
| "middle": [], |
| "last": "Habash", |
| "suffix": "" |
| }, |
| { |
| "first": "Manoj", |
| "middle": [], |
| "last": "Pooleery", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of The International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "1094--1101", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arfath Pasha, Mohamed Al-Badrashiny, Mona T. Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. MADAMIRA: A fast, comprehensive tool for morphological analysis and disambiguation of Ara- bic. In Proceedings of The International Conference on Language Resources and Evaluation, pages 1094- 1101.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Zipf's word frequency law in natural language: A critical review and future directions", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Steven", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Piantadosi", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Psychonomic Bulletin & Review", |
| "volume": "", |
| "issue": "", |
| "pages": "1112--1130", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven T. Piantadosi. 2014. Zipf's word frequency law in natural language: A critical review and future direc- tions. Psychonomic Bulletin & Review, pages 1112- 1130.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Hubs in space: Popular nearest neighbors in high-dimensional data", |
| "authors": [ |
| { |
| "first": "Milo\u0161", |
| "middle": [], |
| "last": "Radovanovi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandros", |
| "middle": [], |
| "last": "Nanopoulos", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirjana", |
| "middle": [], |
| "last": "Ivanovi\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "", |
| "issue": "", |
| "pages": "2487--2531", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milo\u0161 Radovanovi\u0107, Alexandros Nanopoulos, and Mir- jana Ivanovi\u0107. 2010. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Ma- chine Learning Research, pages 2487-2531.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Identifying word translations in non-parallel texts", |
| "authors": [ |
| { |
| "first": "Reinhard", |
| "middle": [], |
| "last": "Rapp", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "320--322", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics, pages 320-322.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Building a free French WordNet from multilingual resources", |
| "authors": [ |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| }, |
| { |
| "first": "Darja", |
| "middle": [], |
| "last": "Fi\u0161er", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "On-toLex", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beno\u00eet Sagot and Darja Fi\u0161er. 2008. Building a free French WordNet from multilingual resources. In OntoLex.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Cross-lingual models of word embeddings: An empirical comparison", |
| "authors": [ |
| { |
| "first": "Shyam", |
| "middle": [], |
| "last": "Upadhyay", |
| "suffix": "" |
| }, |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1661--1670", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1661-1670.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Detecting highly confident word translations from comparable corpora without any prior knowledge", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Vuli\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Francine", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "449--459", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2012. Detecting highly confident word translations from comparable corpora without any prior knowledge. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 449-459.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Bilingual word embeddings from non-parallel document-aligned data applied to bilingual lexicon induction", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Vuli\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Francine", |
| "middle": [], |
| "last": "Moens", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "719--725", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2015. Bilingual word embeddings from non-parallel document-aligned data applied to bilingual lexicon induction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 719-725.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Manifold alignment using Procrustes analysis", |
| "authors": [ |
| { |
| "first": "Chang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sridhar", |
| "middle": [], |
| "last": "Mahadevan", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 25th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1120--1127", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chang Wang and Sridhar Mahadevan. 2008. Manifold alignment using Procrustes analysis. In Proceedings of the 25th International Conference on Machine Learning, pages 1120-1127.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Manifold alignment without correspondence", |
| "authors": [ |
| { |
| "first": "Chang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sridhar", |
| "middle": [], |
| "last": "Mahadevan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1273--1278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chang Wang and Sridhar Mahadevan. 2009. Manifold alignment without correspondence. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence, pages 1273-1278.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Adversarial training for unsupervised bilingual lexicon induction", |
| "authors": [ |
| { |
| "first": "Meng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Huanbo", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1959--1970", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1959-1970.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Algorithm 1: Iterative mapping with spectral initialization.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "Frequency overlap (percentage of WordNet source words that have a translation within the same frequency band) in the fr-en-s dataset.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "Colormaps of kNN adjacency matrices (k = 10) of corresponding English (top)", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF4": { |
| "text": "PCA projections of (a) word embeddings and (b) spectral embeddings of English (black, boldface) and French (blue) words from fr-en-s dataset.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF5": { |
| "text": "Colormaps of Euclidean distances between random sets of (a) English words, (b) their corresponding French translations, and (c) French words that are not translations of the words in (a). The Euclidean distance matrices shown here are asymmetric; the horizontal and vertical directions correspond to disjoint sets of 50 words each within the same language, for a total of 50 \u00d7 50 distances.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF7": { |
| "text": "Quality estimation of IM-SI word translations.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF8": { |
| "text": "Initializing IM with random pairs resulted in poor performance, while spectral initialization helped convergence. Precision at k by noise level.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF9": { |
| "text": "Bilingual transformation precision at k with different characteristics (size and noise level) of the seed dictionary; the transformations are learned on en-fr. fr-en-d: precision at k for linear transformations learned with IM-SI mappings vs. random initialization and the supervised baseline on the French-English datasets.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF0": { |
| "num": null, |
| "content": "<table/>", |
| "text": "French-English and Arabic-English datasets. fr-en-p and ar-en-p are parallel datasets, and the remaining datasets are non-parallel. fr-en-d is extracted from separate time periods to ensure that there is no overlap in content.", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "num": null, |
| "content": "<table><tr><td>A sample of initial pairs extracted using spectral</td></tr><tr><td>embeddings to initialize IM for fr-en-s. Source indicates</td></tr><tr><td>the source French word, Translation is the gold English</td></tr><tr><td>correspondent, and Initial Mapping is the first locally</td></tr><tr><td>induced correspondent.</td></tr></table>", |
| "text": "", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "num": null, |
| "content": "<table><tr><td>A random sample of word mappings from</td></tr><tr><td>French to English using IM with spectral initialization.</td></tr><tr><td>These pairs are later used to fit a linear projection matrix</td></tr><tr><td>between the source and target embeddings. Correct</td></tr><tr><td>mappings are indicated in italics.</td></tr></table>", |
| "text": "", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "num": null, |
| "content": "<table><tr><td>A random sample of word mappings from</td></tr><tr><td>Arabic to English retrieved using IM with spectral</td></tr><tr><td>initialization. Correct mappings are indicated in italics.</td></tr></table>", |
| "text": "", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "num": null, |
| "content": "<table/>", |
| "text": "Examples of correct and incorrect transformations at k = 5 for Arabic-English using the unsupervised IM-SI mappings to fit a linear projection matrix.", |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |