{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:03.824110Z"
},
"title": "MultiSeg: Parallel Data and Subword Information for Learning Bilingual Embeddings in Low Resource Scenarios",
"authors": [
{
"first": "Efsun",
"middle": [],
"last": "Sarioglu Kayi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Anand",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Distributed word embeddings have become ubiquitous in natural language processing as they have been shown to improve performance in many semantic and syntactic tasks. Popular models for learning cross-lingual word embeddings do not consider the morphology of words. We propose an approach to learn bilingual embeddings using parallel data and subword information that is expressed in various forms, i.e. character n-grams, morphemes obtained by unsupervised morphological segmentation and byte pair encoding. We report results for three low resource languages (Swahili, Tagalog, and Somali) and a high resource language (German) in a simulated a low-resource scenario. Our results show that our method that leverages subword information outperforms the model without subword information, both in intrinsic and extrinsic evaluations of the learned embeddings. Specifically, analogy reasoning results show that using subwords helps capture syntactic characteristics. Semantically, word similarity results and intrinsically, word translation scores demonstrate superior performance over existing methods. Finally, qualitative analysis also shows better-quality cross-lingual embeddings particularly for morphological variants in both languages.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Distributed word embeddings have become ubiquitous in natural language processing as they have been shown to improve performance in many semantic and syntactic tasks. Popular models for learning cross-lingual word embeddings do not consider the morphology of words. We propose an approach to learn bilingual embeddings using parallel data and subword information that is expressed in various forms, i.e. character n-grams, morphemes obtained by unsupervised morphological segmentation and byte pair encoding. We report results for three low resource languages (Swahili, Tagalog, and Somali) and a high resource language (German) in a simulated a low-resource scenario. Our results show that our method that leverages subword information outperforms the model without subword information, both in intrinsic and extrinsic evaluations of the learned embeddings. Specifically, analogy reasoning results show that using subwords helps capture syntactic characteristics. Semantically, word similarity results and intrinsically, word translation scores demonstrate superior performance over existing methods. Finally, qualitative analysis also shows better-quality cross-lingual embeddings particularly for morphological variants in both languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Considering the internal word structure when learning monolingual word embeddings has shown to produce better quality word representations, particularly for morphologically rich languages (Luong et al., 2013; Bojanowski and others, 2017) . However, the most popular approaches for learning cross-lingual embeddings have yet to use subword information directly during learning in the cross-lingual space. One of the most widely used approaches for monolingual embeddings (fastText) (Bojanowski and others, 2017) extends the continuous skip-gram model with negative sampling (SGNS) (Mikolov et al., 2013a) to learn subword information given as character n-grams and then representing words as the sum of the n-gram vectors. SGNS has also been used to learn bilingual embeddings using parallel data, the most notable approach being BiSkip (a.k.a, BiVec) (Luong et al., 2015a ). This joint model learns bilingual word representations by exploiting both the context co-occurrence information through the monolingual component and the meaning equivalent signals from the bilingual constraint given by the parallel data. We propose a combined approach that integrates subword information directly when learning bilingual embeddings leveraging the two extensions of the SGNS approach. Our model extends the BiSkip model that uses parallel data by learning representations of subwords and then representing words as the sum of the subword vectors (as was done in the monolingual case for character n-grams (Bojanowski and others, 2017) ). As subwords, we consider character ngrams , morphemes obtained using a state-of-the-art unsupervised morphological segmentation approach (Eskander et al., 2018) and byte pair encoding (BPE) (Sennrich et al., 2016) .",
"cite_spans": [
{
"start": 188,
"end": 208,
"text": "(Luong et al., 2013;",
"ref_id": "BIBREF19"
},
{
"start": 209,
"end": 237,
"text": "Bojanowski and others, 2017)",
"ref_id": null
},
{
"start": 481,
"end": 510,
"text": "(Bojanowski and others, 2017)",
"ref_id": null
},
{
"start": 580,
"end": 603,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF22"
},
{
"start": 851,
"end": 871,
"text": "(Luong et al., 2015a",
"ref_id": "BIBREF20"
},
{
"start": 1497,
"end": 1526,
"text": "(Bojanowski and others, 2017)",
"ref_id": null
},
{
"start": 1667,
"end": 1690,
"text": "(Eskander et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 1720,
"end": 1743,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We report results for learning bilingual embeddings for three low resource languages (Swahili-swa, Tagalog-tgl, and Somali-som) and a high resource language (Germandeu), all of which are morphologically rich languages. For German, we simulate a low-resource learning scenario (100K parallel data). Our results show that our method that leverages subword information outperforms the BiSkip approach, both in intrinsic and extrinsic evaluations of the learned embeddings (Section 3.). Specifically, analogy reasoning results show that using subwords helps capture syntactic characteristics. Qualitative and intrinsic analysis also shows better-quality cross-lingual embeddings particularly for morphological variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "* Equal Contribution",
"sec_num": null
},
{
"text": "Our proposed method to learn bilingual embeddings uses both parallel data and information about the internal structure of words in both languages during training. In SGNS, given a sequence of words w 1 , ..., w T , the objective is to maximize average log probability where c represents the context:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1/T T t=1 c logp(w c |w t ),",
"eq_num": "(1)"
}
],
"section": "Methodology",
"sec_num": "2."
},
{
"text": "This probability can be calculated with a softmax function as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2."
},
{
"text": "logp(w c |w t ) = e u T w t vw c W e u T w t vw (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2."
},
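The softmax in Equation (2) can be sanity-checked numerically. A minimal sketch (not the authors' code; the toy vectors and function name are illustrative assumptions):

```python
import numpy as np

def log_softmax_prob(U, V, t, c):
    """Log-probability log p(w_c | w_t) as in Equation (2):
    exp(u_{w_t}^T v_{w_c}) normalized over the whole vocabulary of size W."""
    scores = V @ U[t]                 # u_{w_t}^T v_w for every word w
    scores -= scores.max()            # stabilize the exponentials
    return scores[c] - np.log(np.exp(scores).sum())

# Toy vocabulary of 5 words with 3-dimensional target/context vectors.
rng = np.random.default_rng(0)
U = rng.normal(size=(5, 3))          # target ("input") vectors u_w
V = rng.normal(size=(5, 3))          # context ("output") vectors v_w

probs = np.exp([log_softmax_prob(U, V, t=0, c=c) for c in range(5)])
assert abs(probs.sum() - 1.0) < 1e-9  # a valid distribution over contexts
```

In practice SGNS avoids this full normalization via negative sampling; the sketch only illustrates the objective being approximated.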
{
"text": "where W is the size of the vocabulary, and u wt and v wc are the corresponding word vector representations for w c and w t in R . BiSkip (Luong et al., 2015b) uses sentence-level aligned data (parallel data) to learn bilingual embeddings by extending the SGNS to predict the surrounding words in each language, using SGNS for both the monolingual and cross-lingual objective. In other words, given two languages l 1 and l 2 , BiSkip model trains four SGNS models Wax aanan si fiican umaqlin ayuu ku celceliyey . he repeated something that I could not hear well .",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "(Luong et al., 2015b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2."
},
{
"text": "Wax aan si fiic maql ayuu ku celcel . he repeat someth that I could not hear well . Alignment Wax:something aanan:something aanan:I si:that si:could fiican:well umaqlin:hear ayuu:not ku:NA celceliyey:repeated Table 1 : English-Somali Alignment jointly which predict words between the following pairs of languages:",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 216,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stem",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "l 1 \u2192 l 1 , l 2 \u2192 l 2 , l 1 \u2192 l 2 , l 2 \u2192 l 1",
"eq_num": "(3)"
}
],
"section": "Stem",
"sec_num": null
},
{
"text": "However, in this model each word is assigned a distinct vector. To take into account the morphology of words in both languages, we extend BiSkip to include subword information during learning. The approach is based on the idea introduced by Bojanowski and others (2017) for the monololingual fastText embeddings, where the SGNS is extended to learn the representation of character n-grams and then represent the word as the sum of its n-gram vectors as in Equation 4 where N is set of character n-grams and c n is the word embedding for n-gram n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stem",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w = 1/|N | n\u2208N c n",
"eq_num": "(4)"
}
],
"section": "Stem",
"sec_num": null
},
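Equation (4), a word vector as the normalized sum of its subword vectors, can be sketched as follows. The n-gram extraction with boundary markers follows the fastText convention; the helper names and toy data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word wrapped in boundary markers, as in fastText."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word, subword_vecs, dim):
    """Equation (4): the word vector is the average of its subword vectors."""
    grams = char_ngrams(word)
    vecs = [subword_vecs[g] for g in grams if g in subword_vecs]
    return sum(vecs) / len(vecs) if vecs else np.zeros(dim)

# Toy usage: "<cat>" yields 3 trigrams, 2 four-grams and 1 five-gram.
table = {g: np.ones(3) for g in char_ngrams("cat")}
v = word_vector("cat", table, dim=3)
```

The same composition applies when the subwords are morphemes or BPE units instead of character n-grams; only the segmentation step changes.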
{
"text": "In our approach, which we call MultiSeg, we consider subwords as character n-grams (between 3 and 6 as in fast-Text), or as morphemes, or as byte pair enconding (BPE) Figure 2 : Alignment algorithm that are computed by merging most frequent adjacent pairs of characters in the corpora. When considering morphemes as subwords, we either split the words into prefix, stem and suffix, or we consider all morphemes, that is the stem and all affixes. We use an unsupervised morphological segmentation approach (Eskander et al., 2018; Eskander et al., 2019) based on Adaptor Grammars that has been shown to produce state-of-the-art results for a variety of morphologically rich languages (e.g., Turkish, Arabic, and 4 Uto-Aztecan languages which are low resource and polysynthetic). We denote our proposed method using each subword type as MultiSeg CN (uses char n-gram as representation during training), MultiSeg M (uses prefix, stem, suffix morphemes), MultiSeg morph all (uses all morphemes), MultiSeg BP E (uses byte pair encondings), MultiSeg All (uses all subword types as representations during training). Figure 1a shows all possible segmentations for a given word in a language. Once best alignment is chosen, e.g. wordlevel or stem-level alignment, it is passed as input data to the training algorithm. As an example, Figure 1b shows subword structure i.e. morphological segmentation, of two parallel sentences, one in English and the other in low resource language. First sentence consists of five words and the corresponding aligned sentence consists of four words and internally, they are made up of various counts of segments i.e. one root and one or more prefixes and suffixes. For the current word in training (highlighted in the Figure 1b ), corresponding aligned word in the other sentence is also highlighted and their internal alignment is shown. Similarly, within the same sentence, the current word's internal alignment with neighboring words in its context is shown. 
For aligning segments of the words, we consider several possibilities i.e. word and stem-based alignment and pick the best one as shown in Figure 2 . Example Somali and English sentences and their stemmed output is shown in Table 1. In the case that alignment based on stem performs better than alignment based on words, word level alignment can still be constructed through stem-to-word connection. ",
"cite_spans": [
{
"start": 505,
"end": 528,
"text": "(Eskander et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 529,
"end": 551,
"text": "Eskander et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 167,
"end": 175,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1108,
"end": 1117,
"text": "Figure 1a",
"ref_id": null
},
{
"start": 1323,
"end": 1332,
"text": "Figure 1b",
"ref_id": null
},
{
"start": 1741,
"end": 1751,
"text": "Figure 1b",
"ref_id": null
},
{
"start": 2125,
"end": 2133,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stem",
"sec_num": null
},
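The BPE subword type described above (merging the most frequent adjacent pairs of characters) can be illustrated with a minimal merge-learning sketch. This is not Sennrich et al.'s implementation; the function name and toy corpus are assumptions:

```python
from collections import Counter

def bpe_merges(corpus_words, num_merges):
    """Learn BPE merges by repeatedly fusing the most frequent adjacent symbol pair."""
    words = {tuple(w): c for w, c in Counter(corpus_words).items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, count in words.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        merges.append(best)
        merged = {}
        for symbols, count in words.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1]); i += 2
                else:
                    out.append(symbols[i]); i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + count
        words = merged
    return merges
```

On the toy corpus ["low", "low", "lower"], the first two learned merges fuse "l"+"o" and then "lo"+"w", so frequent stems emerge as single subword units.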
{
"text": "This section describes the data used for training our bilingual word embeddings and our evaluation setup, including the evaluation datasets and measures. We build bilingual embeddings for Swahili-English, Tagalog-English, Somali-English and German-English. For Swahili, Tagalog and Somali, we use parallel corpora provided by the IARPA MATERIAL program 1 . Data statistics for each language i.e. size of parallel corpora, vocabulary and dictionaries, are listed in Table 2 . For German, we use the Europarl dataset (Koehn, 2005) . Since the size of this parallel dataset is much larger than the others (1,908,920), we select a random subset of 100K parallel sentence to imitate a low-resource scenario. This is important as parallel corpora is more costly to obtain than other bilingual resources, such as dictionaries. For all the models, symmetric word alignments from parallel corpora are learned via the fast align tool (Dyer et al., 2013) . For aligning segments of the words, we compute word and stembased alignments and between the two, aligning based on stem performs better across all languages and dimensions. We train embeddings with different dimensions, d = 40 and d = 300, for 20 iterations. Our code for training Multi-Seg embeddings, pre-trained cross-lingual embeddings and evaluation scripts such as word translation score and coverage will be publicly available 2 . We evaluate our approach both intrinsically and extrinsically on various monolingual and cross-lingual tasks and compare the performance to the BiSkip baseline. Recall, that BiSkip does not use any subword information when training the bilingual embeddings.",
"cite_spans": [
{
"start": 515,
"end": 528,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF14"
},
{
"start": 924,
"end": 943,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 465,
"end": 472,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Training of Bilingual Embeddings",
"sec_num": "2.1."
},
{
"text": "Task. An important intrinsic evaluation task for learning bilingual embeddings is the word translation task a.k.a. bilingual dictionary induction which assesses how good bilingual embeddings are at detecting word pairs that are semantically similar across languages by checking if translationally equivalent words in different languages are nearby in the embedding space. As our evaluation dictionaries, we use bilingual dictionaries derived from Wiktionary using Wikt2Dict tool (Acs et al., 2013) which has polysemous entries in both directions. We generate Swahili-English, Tagalog-English. Somali-English and German-English dictionaries (the sizes are given in Table 2 ). We argue that these dictionaries are more reliable as evaluation dictionaries compared to Google Translate dictionaries, which are generally used only for evaluation. We calculate precision at k, where k = 1 and k = 10) (P @1, P @10) for both source-to-target and target-to-source directions and take an average of these scores as the final accuracy. We take the definition of the task from (Ammar et al., 2016) . In conjunction with P @1 and P @10, we also report coverage as in (Ammar et al., 2016) , given as the total number of common word pairs (l 1 , w 1 ), (l 2 , w 2 ) that exist in both the test dictionary and the embedding, divided by size of the dictionary. The precision at 1 (P@1) score for",
"cite_spans": [
{
"start": 479,
"end": 497,
"text": "(Acs et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 1067,
"end": 1087,
"text": "(Ammar et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 1156,
"end": 1176,
"text": "(Ammar et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 664,
"end": 672,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Intrinsic Evaluation Word Translation",
"sec_num": "2.1.1."
},
{
"text": "German Swahili Tagalog Somali Coverage: 0.159 Coverage: 0.212 Coverage: 0.116 Coverage: 0.195 P@1 P@10 P@1 P@10 P@1 P@10 P@1 P@10 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Dimension",
"sec_num": null
},
{
"text": "(l 1 , w 1 ), (l 2 , w 2 ) both of which are covered by an embedding E is 1 if cosine(E(l 1 , w 1 ), E(l 2 , w 2 )) \u2265 cosine(E(l 1 , w 1 ), E(l 2 , w 2 )) \u2200w 2 \u2208 G l2 here G l2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Dimension",
"sec_num": null
},
{
"text": "is the set of words of language l 2 in the evaluation dataset, and cosine is the cosine similarity function. Otherwise, the score is 0. The overall score is the average score for all word pairs covered by the embedding. Precision at 10 (P @10) is computed as the fraction of the entries (w 1 , w 2 ) in the test dictionary, for which w 2 belongs to the top-10 neighbors of the word vector of w 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Dimension",
"sec_num": null
},
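The P@k scoring just described (gold translation among the k nearest target words by cosine similarity) might be sketched as follows. The function name and toy setup are illustrative, not the released evaluation scripts:

```python
import numpy as np

def precision_at_k(src_vecs, tgt_vecs, gold, k=1):
    """Fraction of covered source words whose gold translation is among
    the k nearest target words by cosine similarity.
    gold maps a source row index to its gold target row index."""
    def unit(M):
        return M / np.linalg.norm(M, axis=1, keepdims=True)
    S, T = unit(src_vecs), unit(tgt_vecs)
    hits = 0
    for src_idx, gold_idx in gold.items():
        sims = T @ S[src_idx]              # cosine: rows are unit vectors
        top_k = np.argsort(-sims)[:k]
        hits += int(gold_idx in top_k)
    return hits / len(gold)
```

As in the paper, this would be run in both translation directions (source-to-target and target-to-source) and the two scores averaged, restricted to dictionary pairs covered by the embedding.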
{
"text": "Analogy Reasoning Task. Analogy reasoning task consists of questions of the form if A is to B then what is C to D, where D must be predicted. Question is assumed to be correctly answered if the closest word to the vector is exactly the same as the correct word in the question. We use the datasets for English (Mikolov et al., 2013b) which consist of 8,869 semantic and 10,675 syntactic questions. Some of the example semantic categories are Capital City, Currency, City-in-State and Man-Woman and some of the example syntactic categories are opposite, superlative, plural nouns and past tense.",
"cite_spans": [
{
"start": 310,
"end": 333,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Dimension",
"sec_num": null
},
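The analogy prediction rule (the word closest to the offset vector must equal D) can be sketched as below, excluding the question words from the candidates as is standard in this evaluation; the function name and toy vectors are illustrative:

```python
import numpy as np

def solve_analogy(emb, vocab, a, b, c):
    """Answer 'a is to b as c is to ?' with the word nearest (by cosine)
    to v_b - v_a + v_c, never returning one of the three question words."""
    idx = {w: i for i, w in enumerate(vocab)}
    target = emb[idx[b]] - emb[idx[a]] + emb[idx[c]]
    target /= np.linalg.norm(target)
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ target
    for w in (a, b, c):
        sims[idx[w]] = -np.inf            # exclude question words
    return vocab[int(np.argmax(sims))]

# Toy embedding constructed so that king - man + woman = queen.
vocab = ["man", "woman", "king", "queen"]
emb = np.array([[1., 0., 0.], [0., 1., 0.], [1., 0., 1.], [0., 1., 1.]])
```

With these vectors, `solve_analogy(emb, vocab, "man", "king", "woman")` returns "queen", mirroring the man:king :: woman:? pattern.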
{
"text": "Word Similarity Task. Word similarity datasets contain word pairs which are assigned similarity ratings by humans. These rankings are then compared with cosine similarity between the word vectors based on the Spearman's rank correlation coefficient to estimate how well they capture semantic relatedness. In our evaluations, we use three word similarity datasets: WordSimilarity-353 (WS353) (Finkelstein et al., 2001) , Stanford Rare Word (RW) similarity dataset (Luong et al., 2013) , and Stanford's Contextual Word Similarities (SCWS) dataset (Huang et al., 2012).",
"cite_spans": [
{
"start": 391,
"end": 417,
"text": "(Finkelstein et al., 2001)",
"ref_id": "BIBREF11"
},
{
"start": 463,
"end": 483,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Dimension",
"sec_num": null
},
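The Spearman-based word similarity evaluation can be sketched with a small rank-correlation helper (ties are ignored for brevity; this is not the authors' evaluation code):

```python
import numpy as np

def ranks(x):
    """Ranks of the values in x (1 = smallest); ties not handled for brevity."""
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    return r

def spearman(human_scores, model_scores):
    """Spearman's rho between human similarity ratings and model cosine scores:
    the Pearson correlation of the two rank sequences."""
    rh = ranks(np.asarray(human_scores, dtype=float))
    rm = ranks(np.asarray(model_scores, dtype=float))
    rh -= rh.mean(); rm -= rm.mean()
    return float((rh @ rm) / np.sqrt((rh @ rh) * (rm @ rm)))
```

In the evaluation, `human_scores` would be the dataset's similarity ratings and `model_scores` the cosine similarities between the corresponding embedding pairs.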
{
"text": "As extrinsic evaluation of our embeddings in a downstream semantic task, we use Cross-Language Document Classification (CLDC) 3 (Klementiev et al., 2012) . In this task, a document classifier is trained using the document representations derived from the cross-lingual embeddings for language l 1 , and then the trained model is tested on documents from language l 2 . The classifier is trained using the averaged perceptron algorithm and the document vectors are the averaged vector of words in the document weighted by their idf values. For this task, we only have dataset for German-English, and we report results where we train on 1, 000 documents and test on 5, 000 to be consistent with the original BiSkip setup.",
"cite_spans": [
{
"start": 128,
"end": 153,
"text": "(Klementiev et al., 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "2.1.2."
},
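The idf-weighted averaged document vectors used as CLDC classifier input can be sketched as follows (the helper names and toy data are illustrative assumptions):

```python
import math
import numpy as np
from collections import Counter

def idf_weights(corpus):
    """idf(w) = log(N / df(w)) over a list of tokenized documents."""
    N = len(corpus)
    df = Counter(w for doc in corpus for w in set(doc))
    return {w: math.log(N / c) for w, c in df.items()}

def doc_vector(doc, embeddings, idf, dim):
    """Document representation: average of word vectors weighted by idf."""
    vec, total = np.zeros(dim), 0.0
    for word in doc:
        if word in embeddings:
            w = idf.get(word, 0.0)
            vec += w * embeddings[word]
            total += w
    return vec / total if total > 0 else vec
```

A word appearing in every document gets idf 0 and thus contributes nothing, so the representation is dominated by more discriminative terms, which is the point of the idf weighting.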
{
"text": "The performance on the word translation task for all languages is shown in Table 3 , where the best scores are highlighted in red for dimension 40 and blue for dimension 300. MultiSeg methods outperform BiSkip for all languages both for P @1 and P @10. Among MultiSeg methods, across languages, morphological segmentation based models have the best scores followed by MultiSeg All especially for P 10 and with 40 dimension. MultiSeg CN with 300 dimension also performs well across languages specifically for P 10. Through an error analysis, we noticed that some of the performance gain for MultiSeg was due to the fact that these models were able to learn word translations of morphological variants of words. Table 4 lists some of Figure 5 for all MultiSeg approaches. As an illustration, in Figure 5d , qaranimo is close to togetherness while the same (nationhood) is also shown in a coarser fashion in 5c, while other approaches could not capture this representation.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 710,
"end": 717,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 732,
"end": 740,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 793,
"end": 802,
"text": "Figure 5d",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3."
},
{
"text": "Word similarity, analogy reasoning and CLDC results for English and German are summarized in Table 5 where Spearman's rank correlation coefficients (\u03c1 * 100) are reported for word similarity task and accuracy is reported for analogy reasoning task (as percentages) and for CLDC. MultiSeg approaches outperform BiSkip for all languages and for all tasks except semantic analogy. For syntactic and overall analogy reasoning scores, MultiSeg All performs the best which demonstrates that with better crosslingual embedding, a performance increase is seen in monolingual space, i.e. English. For CLDC task, morphological segmentation approaches, i.e. MultiSeg M and MultiSeg M All perform the best. For word similarity task, overall MultiSeg BP E and MultiSeg All performs the best for English and MultiSeg BP E and MultiSeg M All for German.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3."
},
{
"text": "Word similarity and analogy reasoning results for English using low resource languages' cross-lingual embeddings are shown in Table 6 . Again, MultiSeg approaches outperform BiSkip for all languages and for all tasks except for Somali semantic analogy and among them, MultiSeg All performs the best overall for all languages. A more detailed analysis of analogy reasoning task (Mikolov et al., 2013b) including breakdown of each semantic and syntactic cat- egories can be seen in Figure 6 for Swahili. 4 Semantic analogy task consists of questions such as capital countries, currency, city-in-the-state and hence it does not necessarily benefit from our subword based approach. For German and Somali, BiSkip has the best performance in this category whereas for Swahili and Tagalog MultiSeg approaches perform the best. On the other hand, syntactic analogy consists of questions about base/comparative/superlative forms of adjectives, singular/plural and possessive/non-possessive There are several ways of incorporating morphological information into word embeddings. One approach adapted by fastText embeddings (Bojanowski and others, 2017) is to use character n-grams. In addition to whole words, several sizes of n-grams, i.e. three to six, are used during training of the skip-gram model. This approach is languageagnostic and can be adapted to new languages easily. Another approach is to have morphological segmentation as a preprocessing step before training the embeddings (Luong et al., 2013) . Other techniques predict both the word and its morphological tag (Cotterell and Sch\u00fctze, 2015) however, all these approaches are monolingual and work on one language at a time. 
The most closely related work to ours is (Chaudhary et al., 2018) which uses the fastText (Bojanowski and others, 2017 ) approach to include morphological information when learning cross-lingual embeddings by combining the high-resource and low resource corpora and training using the skip-gram objective. Their evaluation is limited to named-entity-recognition and machine translation and requires detailed linguistically tagged words on a large monolingual corpus for related languages. Our approach incorporates supervision through small amount of parallel corpora while training on subwords for any two languages including unrelated ones.",
"cite_spans": [
{
"start": 377,
"end": 400,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF23"
},
{
"start": 1113,
"end": 1142,
"text": "(Bojanowski and others, 2017)",
"ref_id": null
},
{
"start": 1482,
"end": 1502,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 1570,
"end": 1599,
"text": "(Cotterell and Sch\u00fctze, 2015)",
"ref_id": "BIBREF6"
},
{
"start": 1723,
"end": 1747,
"text": "(Chaudhary et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 1772,
"end": 1800,
"text": "(Bojanowski and others, 2017",
"ref_id": null
}
],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 480,
"end": 488,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3."
},
{
"text": "Bilingual word embeddings create shared semantic spaces in multi-lingual contexts and can be trained using different types of bilingual resources. Techniques such as BiSkip (Luong et al., 2015b) use sentence aligned parallel corpora, whereas BiCVM (Vuli\u0107 and Moens, 2015) use document aligned comparable corpora. There are also techniques that map pre-trained monolingual embeddings into shared space via bilingual dictionaries (Lample et al., 2018b; Artetxe et al., 2018) . Finally, there are semi-supervised and unsupervised methods that require little to none bilingual supervision (Lample et al., 2018a; Artetxe and others, 2018) . Among these techniques, we adapted BiSkip to learn embeddings jointly. This eliminates the need for having pre- Figure 6 : Swahili Analogy Reasoning Task Semantic and Syntactic Categories trained monolingual embeddings and it has been shown to have better accuracy than comparable corpora based approaches (Upadhyay et al., 2016) . In addition, our intrinsic evaluations of semi-supervised and unsupervised embeddings did not perform well. Recently, pre-trained contextual embeddings have been extended to other languages, e.g. XLM (Lample and Conneau, 2019), cross-lingual ELMo (Schuster et al., 2019) and multilingual BERT (Devlin et al., 2019) shown to have promising results on a variety of tasks. However, they are not as amenable in low resource scenarios where they tend to overfit. They are also not good at fine-grained linguistic tasks (Liu et al., 2019) and geared toward sentence level tasks. In addition, if a pretrained model is not available, it requires lots of computing power and data to be trained from scratch. For instance, XLM model uses 200K for low resource and 18 million for German. For parallel data, they use 165K for Swahili and 9 million for German.",
"cite_spans": [
{
"start": 173,
"end": 194,
"text": "(Luong et al., 2015b)",
"ref_id": "BIBREF21"
},
{
"start": 248,
"end": 271,
"text": "(Vuli\u0107 and Moens, 2015)",
"ref_id": "BIBREF29"
},
{
"start": 428,
"end": 450,
"text": "(Lample et al., 2018b;",
"ref_id": "BIBREF17"
},
{
"start": 451,
"end": 472,
"text": "Artetxe et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 585,
"end": 607,
"text": "(Lample et al., 2018a;",
"ref_id": "BIBREF16"
},
{
"start": 608,
"end": 633,
"text": "Artetxe and others, 2018)",
"ref_id": null
},
{
"start": 942,
"end": 965,
"text": "(Upadhyay et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 1215,
"end": 1238,
"text": "(Schuster et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 1261,
"end": 1282,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1482,
"end": 1500,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 748,
"end": 756,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bilingual Embeddings",
"sec_num": "4.2."
},
{
"text": "We present a new cross-lingual embedding training method for low resource languages, MultiSeg, that incorporates subword information (given as character n-grams, morphemes, or BPEs) during training from parallel corpora. The morphemes are obtained from a state-of-the-art unsupervised morphological segmentation approach. We show that it consistently performs better than the BiSkip baseline, including on word similarity, syntactical analogy and word translation tasks across all languages. Extrinsically, cross-lingual document classification scores also outperform BiSkip. Finally, qualitative results show that our approach is able to learn better word-representations espe-cially for morphologically related words in both source and target language. We plan to extend our technique to train on more than two languages from the same language family.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5."
},
{
"text": "MATERIAL is an acronym for Machine Translation for English Retrieval of Information in Any Language(Rubino, 2016) 2 https://github.com/vishalanand/MultiSeg",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "CLDC code is provided by the authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We obtained similar graphs for other languages.forms of common nouns; and base, past and third person present tense forms of verbs. Accordingly, our representation is able to perform better for syntactical analogy questions where MultiSeg methods consistently outperform BiSkip in all of the categories. Among the MultiSeg representations, M ultiSeg CN performs the best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is based upon work supported by the Intelligence Advanced Research Projects Activity (IARPA) MA-TERIAL program, via contract FA8650-17-C-9117. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of IARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Building basic vocabulary across 40 languages",
"authors": [
{
"first": "J",
"middle": [],
"last": "Acs",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Pajkossy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kornai",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth Workshop on Building and Using Comparable Corpora",
"volume": "",
"issue": "",
"pages": "52--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Acs, J., Pajkossy, K., and Kornai, A. (2013). Building ba- sic vocabulary across 40 languages. In Proceedings of the Sixth Workshop on Building and Using Comparable Corpora, pages 52-58.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Massively multilingual word embeddings",
"authors": [
{
"first": "W",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ammar, W., Mulcaire, G., Tsvetkov, Y., Lample, G., Dyer, C., and Smith, N. A. (2016). Massively multilingual word embeddings. ArXiv, abs/1602.01925.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "789--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artetxe, M. et al. (2018). A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In ACL, pages 789-798.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Agirre",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artetxe, M., Labaka, G., and Agirre, E. (2018). Generaliz- ing and improving bilingual word embedding mappings with a multi-step framework of linear transformations.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
}
],
"year": 2017,
"venue": "TACL",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojanowski, P. et al. (2017). Enriching word vectors with subword information. TACL, 5:135-146.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adapting word embeddings to new languages with morphological and phonological subword representations",
"authors": [
{
"first": "A",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "D",
"middle": [
"R"
],
"last": "Mortensen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3285--3295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaudhary, A., Zhou, C., Levin, L., Neubig, G., Mortensen, D. R., and Carbonell, J. (2018). Adapting word embed- dings to new languages with morphological and phono- logical subword representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 3285-3295.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Morphological wordembeddings",
"authors": [
{
"first": "R",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1287--1292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cotterell, R. and Sch\u00fctze, H. (2015). Morphological word- embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, pages 1287-1292.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 4171-4186.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A simple, fast, and effective reparameterization of ibm model 2",
"authors": [
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dyer, C., Chahuneau, V., and Smith, N. A. (2013). A sim- ple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatically tailoring unsupervised morphological segmentation to the language",
"authors": [
{
"first": "R",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eskander, R., Rambow, O., and Muresan, S. (2018). Au- tomatically tailoring unsupervised morphological seg- mentation to the language. In Proceedings of the Fif- teenth Workshop on Computational Research in Phonet- ics, Phonology, and Morphology, pages 78-83.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised morphological segmentation for low-resource polysynthetic languages",
"authors": [
{
"first": "R",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "189--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eskander, R., Klavans, J., and Muresan, S. (2019). Un- supervised morphological segmentation for low-resource polysynthetic languages. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 189-195.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "L",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 10th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "406--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G., and Ruppin, E. (2001). Placing search in context: The concept revisited. In Proceedings of the 10th International Conference on World Wide Web, pages 406-414.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "E",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, E., Socher, R., Manning, C., and Ng, A. (2012). Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th An- nual Meeting of the Association for Computational Lin- guistics, pages 873-882.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Inducing crosslingual distributed representations of words",
"authors": [
{
"first": "A",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Bhattarai",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klementiev, A., Titov, I., and Bhattarai, B. (2012). Induc- ing crosslingual distributed representations of words. In Proceedings of COLING 2012.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Europarl: A Parallel Corpus for Statistical Machine Translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "Conference Proceedings: the tenth Machine Translation Summit",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, P. (2005). Europarl: A Parallel Corpus for Statis- tical Machine Translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79-86.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Cross-lingual language model pretraining",
"authors": [
{
"first": "G",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lample, G. and Conneau, A. (2019). Cross-lingual lan- guage model pretraining. Advances in Neural Informa- tion Processing Systems (NeurIPS).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "G",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lample, G., Conneau, A., Denoyer, L., and Ranzato, M. (2018a). Unsupervised machine translation using mono- lingual corpora only. In International Conference on Learning Representations.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Word translation without parallel data",
"authors": [
{
"first": "G",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Gou",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lample, G., Conneau, A., Ranzato, M., Denoyer, L., and J\u00c3 c gou, H. (2018b). Word translation without paral- lel data. In International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Linguistic knowledge and transferability of contextual representations",
"authors": [
{
"first": "N",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1073--1094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, N. F., Gardner, M., Belinkov, Y., Peters, M. E., and Smith, N. A. (2019). Linguistic knowledge and transfer- ability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 1073-1094.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luong, T., Socher, R., and Manning, C. (2013). Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Con- ference on Computational Natural Language Learning, pages 104-113.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bilingual word representations with monolingual quality in mind",
"authors": [
{
"first": "T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luong, T., Pham, H., and Manning, C. D. (2015a). Bilin- gual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bilingual word representations with monolingual quality in mind",
"authors": [
{
"first": "T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luong, T., Pham, H., and Manning, C. D. (2015b). Bilin- gual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G., and Dean, J. (2013a). Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Infor- mation Processing Systems -Volume 2, NIPS'13, pages 3111-3119.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "W.-T",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Yih, W.-t., and Zweig, G. (2013b). Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 746- 751.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Iarpa material program",
"authors": [
{
"first": "C",
"middle": [],
"last": "Rubino",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rubino, C. (2016). Iarpa material program.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1599--1613",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schuster, T., Ram, O., Barzilay, R., and Globerson, A. (2019). Cross-lingual alignment of contextual word em- beddings, with applications to zero-shot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Volume 1, pages 1599-1613.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennrich, R., Haddow, B., and Birch, A. (2016). Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics, pages 1715-1725.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Cross-lingual models of word embeddings: An empirical comparison",
"authors": [
{
"first": "S",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Roth",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1661--1670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Upadhyay, S., Faruqui, M., Dyer, C., and Roth, D. (2016). Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1661-1670.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Visualizing data using t-SNE",
"authors": [
{
"first": "L",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "van der Maaten, L. and Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bilingual word embeddings from non-parallel document-aligned data applied to bilingual lexicon induction",
"authors": [
{
"first": "I",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "M.-F",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "719--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vuli\u0107, I. and Moens, M.-F. (2015). Bilingual word embed- dings from non-parallel document-aligned data applied to bilingual lexicon induction. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Confer- ence on Natural Language Processing, pages 719-725.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) Training and Alignment Schema for word wi in language lj (b) MultiSeg model illustration for Morph All case Figure 1: MultiSeg Architecture Somali English Word",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Figure 3: t-SNE visualization of English-Swahili vectors",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "t-SNE visualization for English-Somali vectors",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "Word translation scores and coverage percentages for all languages",
"content": "<table><tr><td colspan=\"2\">Language English</td><td colspan=\"3\">Deu/Swa/Tgl/Som BiSkip MultiSegCN</td><td>MultiSegM</td><td colspan=\"2\">MultiSegM all MultiSegM BP E</td></tr><tr><td>German</td><td colspan=\"2\">correct correction berichtigung berichtigen</td><td/><td/><td>x</td><td>x</td><td>x x</td></tr><tr><td>Swahili</td><td>office officer</td><td>afisi afisa</td><td>x</td><td>x x</td><td>x</td><td>x</td><td>x</td></tr><tr><td>Tagalog</td><td>mine my</td><td>akin aking</td><td>x</td><td>x</td><td>x x</td><td>x</td><td>x</td></tr><tr><td>Somali</td><td colspan=\"2\">approve approving ansixiyay ansixinta</td><td/><td/><td>x</td><td>x</td><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"text": "show similar words related to the word done for Swahili and Tagalog respectively. It can be seen that MultiSeg CN learns better word representations than BiSkip by placing morphologically and semantically related words in both languages closer (done \u2212 nagawa, did \u2212 ginawa, doing \u2212 ginagawa). Similar graphs for Somali are provided in",
"content": "<table><tr><td>Figure 4: t-SNE visualization for English-Tagalog vectors</td></tr><tr><td>the examples for the words from the test bilingual dictio-</td></tr><tr><td>naries and their morphological variants and show whether</td></tr><tr><td>or not they are predicted correctly using each technique.</td></tr><tr><td>For all of the languages, BiSkip is only able to predict zero</td></tr><tr><td>or one form of the word correctly, whereas MultiSeg pre-</td></tr><tr><td>dict various forms of the words correctly in both English</td></tr><tr><td>and other languages.</td></tr><tr><td>Qualitatively, two-dimensional visualizations of cross-</td></tr><tr><td>lingual word vectors are produced using t-Distributed</td></tr><tr><td>Stochastic Neighbor Embedding (t-SNE) (van der Maaten</td></tr><tr><td>and Hinton, 2008) dimensionality reduction method. Fig-</td></tr><tr><td>ures 3 and 4</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF7": {
"text": "German-English Monolingual and Cross-lingual Evaluation Results",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF9": {
"text": "Monolingual English Evaluation of Low Resource Languages",
"content": "<table><tr><td>4. Related Work</td></tr><tr><td>4.1. Monolingual Morphological Embeddings</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}