{
"paper_id": "W18-0311",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:27:04.621951Z"
},
"title": "Phonologically Informed Edit Distance Algorithms for Word Alignment with Low-Resource Languages",
"authors": [
{
"first": "R",
"middle": [
"Thomas"
],
"last": "Mccoy",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": "",
"affiliation": {},
"email": "robert.frank@yale.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Edit distance is commonly used to relate cognates across languages. This technique is particularly relevant for the processing of lowresource languages because the sparse data from such a language can be significantly bolstered by connecting words in the lowresource language with cognates in a related, higher-resource language. We present three methods for weighting edit distance algorithms based on linguistic information. These methods base their penalties on (i) phonological features, (ii) distributional character embeddings, or (iii) differences between cognate words. We also introduce a novel method for evaluating edit distance through the task of low-resource word alignment by using editdistance neighbors in a high-resource pivot language to inform alignments from the lowresource language. At this task, the cognatebased scheme outperforms our other methods and the Levenshtein edit distance baseline, showing that NLP applications can benefit from information about cross-linguistic phonological patterns.",
"pdf_parse": {
"paper_id": "W18-0311",
"_pdf_hash": "",
"abstract": [
{
"text": "Edit distance is commonly used to relate cognates across languages. This technique is particularly relevant for the processing of lowresource languages because the sparse data from such a language can be significantly bolstered by connecting words in the lowresource language with cognates in a related, higher-resource language. We present three methods for weighting edit distance algorithms based on linguistic information. These methods base their penalties on (i) phonological features, (ii) distributional character embeddings, or (iii) differences between cognate words. We also introduce a novel method for evaluating edit distance through the task of low-resource word alignment by using editdistance neighbors in a high-resource pivot language to inform alignments from the lowresource language. At this task, the cognatebased scheme outperforms our other methods and the Levenshtein edit distance baseline, showing that NLP applications can benefit from information about cross-linguistic phonological patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many NLP techniques require large quantities of training data, which is a problem for low-resource languages (languages with little available data). Work on low-resource languages often focuses on tackling this low-data problem, such as by creating more data (Marton et al., 2009) , collecting more data from the Internet (Mendels et al., 2015) or from scholarly papers (Xia et al., 2016) , efficiently eliciting informative data (Probst et al., 2002) , or crowdsourcing the collection of corpora (Post et al., 2012) . One promising approach is to supplement the available data for a low-resource language with data from higher-resource languages, an approach which has been applied to tasks ranging from speech recognition (Thomas et al., 2012) to machine translation (Dholakia and Sarkar, 2014) .",
"cite_spans": [
{
"start": 259,
"end": 280,
"text": "(Marton et al., 2009)",
"ref_id": "BIBREF27"
},
{
"start": 322,
"end": 344,
"text": "(Mendels et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 370,
"end": 388,
"text": "(Xia et al., 2016)",
"ref_id": "BIBREF42"
},
{
"start": 430,
"end": 451,
"text": "(Probst et al., 2002)",
"ref_id": "BIBREF36"
},
{
"start": 497,
"end": 516,
"text": "(Post et al., 2012)",
"ref_id": "BIBREF35"
},
{
"start": 724,
"end": 745,
"text": "(Thomas et al., 2012)",
"ref_id": "BIBREF39"
},
{
"start": 769,
"end": 796,
"text": "(Dholakia and Sarkar, 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An open problem within this approach is finding the best way to map information from one language to another. When connecting related languages, a natural place to start is with cognates, and many works use edit distance for cognate detection (Simard et al., 1993; Barker and Sutcliffe, 2000; Koehn and Knight, 2000; Mann and Yarowsky, 2001; Inkpen et al., 2005; Bergsma and Kondrak, 2007; Munro and Manning, 2012) . Edit distance refers to the difference between two strings, and this paper explores several techniques for determining edit distance. Our baseline is the Levenshtein edit distance algorithm (Levenshtein, 1966; Wagner and Fischer, 1974) , and we introduce three novel edit distance algorithms, namely feature-based edit distance, char2vec-based edit distance, and cognatebased edit distance, and assess their performance at transferring information across languages using the task of low-resource cognate identification. Finally, we introduce a novel method for evaluating edit distance algorithms through the task of word alignment.",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "(Simard et al., 1993;",
"ref_id": "BIBREF38"
},
{
"start": 265,
"end": 292,
"text": "Barker and Sutcliffe, 2000;",
"ref_id": "BIBREF1"
},
{
"start": 293,
"end": 316,
"text": "Koehn and Knight, 2000;",
"ref_id": "BIBREF17"
},
{
"start": 317,
"end": 341,
"text": "Mann and Yarowsky, 2001;",
"ref_id": "BIBREF26"
},
{
"start": 342,
"end": 362,
"text": "Inkpen et al., 2005;",
"ref_id": "BIBREF13"
},
{
"start": 363,
"end": 389,
"text": "Bergsma and Kondrak, 2007;",
"ref_id": null
},
{
"start": 390,
"end": 414,
"text": "Munro and Manning, 2012)",
"ref_id": "BIBREF31"
},
{
"start": 607,
"end": 626,
"text": "(Levenshtein, 1966;",
"ref_id": "BIBREF23"
},
{
"start": 627,
"end": 652,
"text": "Wagner and Fischer, 1974)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related work Ristad and Yianilos (1998) first presented schemes for training weighted edit distances, and others such as Cotterell et al. (2014) have proposed modifications to this method. Weighted edit distance and other weighted schemes for computing string similarity such as point-wise mutual information have been used by several authors for cognate detection (Kondrak, 2001; Ciobanu and Dinu, 2014; J\u00e4ger and Sofroniev, 2016; J\u00e4ger et al., 2017) . This paper's novel contribution is to extend this technique to a low-resource setting. The prior work in edit-distance-based cognate detection has relied on phonetic transcriptions, information about word meaning, or hand-created lists of cognates on which to train a system. Here we investigate how to assign edit distance weights when no such information is available for one of the languages in question. Several prior systems have used word similarity to assess historical linguistic claims about language phylogeny (Kondrak, 2002; J\u00e4ger, 2013; List, 2013) , but here we follow the inverse strategy of using knowledge about language phylogeny to inform the determination of string similarity by compensating for the lack of resources about a language with information from closely related and better-resourced languages. An additional novel contribution of this paper is to propose a new technique for assessing string similarity metrics based on how much a given metric can improve performance on a practical NLP task.",
"cite_spans": [
{
"start": 15,
"end": 41,
"text": "Ristad and Yianilos (1998)",
"ref_id": "BIBREF37"
},
{
"start": 123,
"end": 146,
"text": "Cotterell et al. (2014)",
"ref_id": "BIBREF9"
},
{
"start": 367,
"end": 382,
"text": "(Kondrak, 2001;",
"ref_id": "BIBREF21"
},
{
"start": 383,
"end": 406,
"text": "Ciobanu and Dinu, 2014;",
"ref_id": "BIBREF7"
},
{
"start": 407,
"end": 433,
"text": "J\u00e4ger and Sofroniev, 2016;",
"ref_id": "BIBREF14"
},
{
"start": 434,
"end": 453,
"text": "J\u00e4ger et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 976,
"end": 991,
"text": "(Kondrak, 2002;",
"ref_id": "BIBREF22"
},
{
"start": 992,
"end": 1004,
"text": "J\u00e4ger, 2013;",
"ref_id": "BIBREF16"
},
{
"start": 1005,
"end": 1016,
"text": "List, 2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use four basic approaches to calculating edit distance, detailed in the following subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edit distance algorithms",
"sec_num": "3"
},
{
"text": "As a baseline, we use Levenshtein edit distance (Levenshtein, 1966; Wagner and Fischer, 1974) . Levenshtein edit distance focuses on three operations that can be performed on a string of characters: 1. Insertion: The insertion of a new character into the string.",
"cite_spans": [
{
"start": 48,
"end": 67,
"text": "(Levenshtein, 1966;",
"ref_id": "BIBREF23"
},
{
"start": 68,
"end": 93,
"text": "Wagner and Fischer, 1974)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein edit distance",
"sec_num": "3.1"
},
{
"text": "The deletion of a character already present in the string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deletion:",
"sec_num": "2."
},
{
"text": "The substitution of some new character for a character already in the string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Substitution:",
"sec_num": "3."
},
{
"text": "The Levenshtein edit distance between two words w 1 and w 2 is defined as the minimum number of insertions and/or deletions and/or substitutions that must be made to transform w 1 into w 2 . Table 1 contains some examples of word pairs and the Levenshtein edit distance (dist L (w 1 , w 2 )) between them.",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Substitution:",
"sec_num": "3."
},
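The Levenshtein recurrence described above can be sketched as a standard dynamic program. This is an illustrative implementation (the function name and layout are mine, not the authors' code):

```python
def levenshtein(w1: str, w2: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to transform w1 into w2."""
    m, n = len(w1), len(w2)
    # dist[i][j] = edit distance between w1[:i] and w2[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i          # delete all of w1[:i]
    for j in range(n + 1):
        dist[0][j] = j          # insert all of w2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub_cost = 0 if w1[i - 1] == w2[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,                  # deletion
                             dist[i][j - 1] + 1,                  # insertion
                             dist[i - 1][j - 1] + sub_cost)       # substitution
    return dist[m][n]

# Examples from Table 1:
print(levenshtein("stephen king", "stephen hawking"))  # 3
print(levenshtein("jim carrey", "john kerry"))         # 6
```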
{
"text": "This section details two approaches to edit distance in which the basic aim is to alter the Levenshtein penalties based on the phonological properties of the characters involved. The assumption underlying this method is that, when two cognates differ in some of the phonemes they contain, they are likely to differ in phonologically sensible ways. For example, it is more likely that one cognate will contain a d where its partner contains a t than it is for one cognate to contain a d where the other contains a u. If this assumption is true, an edit distance algorithm that encodes some phonological information may be more successful at identifying cognates than the basic Levenshtein algorithm. Indeed, Kondrak (2002) showed that the ALINE system, which computes string similarity based on a sophisticated set of phonological features, performed better at cognate identification than the basic Levenshtein method did; however, this success does not necessarily extend to our current situation because, due to our assumption that we are in a low-resource environment, we use only orthographic representations of words, not phonetic transcriptions.",
"cite_spans": [
{
"start": 707,
"end": 721,
"text": "Kondrak (2002)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based edit distance",
"sec_num": "3.2"
},
{
"text": "In this model, the penalty for substituting a vowel for a consonant, or for substituting a consonant for a vowel, is greater than that for substituting a vowel for a vowel or for substituting a consonant for a consonant. Specifically, this model works almost exactly like the basic Levenshtein\u2212with a penalty of 1 for an insertion or a deletion or for a substitution that does not change a vowel to a consonant or vice versa\u2212but the penalty for substituting a vowel for a consonant (or vice versa) is 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vowel/consonant approach",
"sec_num": "3.2.1"
},
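The vowel/consonant scheme only changes the substitution cost in the Levenshtein recurrence. A minimal sketch, assuming plain orthographic vowels (a/e/i/o/u) stand in for the phonological vowel class:

```python
VOWELS = set("aeiou")  # simplification: orthographic vowels only

def sub_cost(a: str, b: str) -> int:
    """0 if equal; 2 for a vowel<->consonant substitution; 1 otherwise."""
    if a == b:
        return 0
    return 2 if (a in VOWELS) != (b in VOWELS) else 1

def vc_edit_distance(w1: str, w2: str) -> int:
    """Levenshtein DP with the vowel/consonant-aware substitution cost;
    insertions and deletions keep their penalty of 1."""
    m, n = len(w1), len(w2)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dist[i][j] = min(dist[i - 1][j] + 1,      # deletion
                             dist[i][j - 1] + 1,      # insertion
                             dist[i - 1][j - 1] + sub_cost(w1[i - 1], w2[j - 1]))
    return dist[m][n]

print(vc_edit_distance("dato", "data"))  # 1: vowel-for-vowel substitution
print(vc_edit_distance("dato", "datd"))  # 2: vowel-for-consonant substitution
```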
{
"text": "For this model, we assign a set of phonological features to each character and make the penalty for any operation equal to the number of features that change when that operation occurs. For example, substituting a d for a t incurs a penalty of 1 because a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More features",
"sec_num": "3.2.2"
},
{
"text": "Table 1: w_1 | w_2 | dist_L(w_1, w_2) | Operations performed. stephen king | stephen hawking | 3 | insert(h), insert(a), insert(w). lemony snicket | jiminy cricket | 5 | sub(l, j), sub(e, i), sub(o, i), sub(s, c), sub(n, r). jim carrey | john kerry | 6 | sub(i, o), insert(h), sub(m, n), sub(c, k), sub(a, e), del(e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More features",
"sec_num": "3.2.2"
},
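The count-the-changed-features penalty can be illustrated with a tiny hypothetical feature table (the paper's actual feature inventory is richer; these four features and their values are my stand-ins):

```python
# Hypothetical mini feature table; every entry has the same feature keys.
FEATURES = {
    "t": {"voiced": 0, "place": "alveolar", "manner": "stop",      "vowel": 0},
    "d": {"voiced": 1, "place": "alveolar", "manner": "stop",      "vowel": 0},
    "s": {"voiced": 0, "place": "alveolar", "manner": "fricative", "vowel": 0},
    "u": {"voiced": 1, "place": "back",     "manner": "vocalic",   "vowel": 1},
}

def feature_sub_cost(a: str, b: str) -> int:
    """Substitution penalty = number of features on which a and b differ."""
    fa, fb = FEATURES[a], FEATURES[b]
    return sum(fa[k] != fb[k] for k in fa)

print(feature_sub_cost("d", "t"))  # 1 (only voicing changes)
print(feature_sub_cost("d", "u"))  # 3 (place, manner, and vowel all change)
```

Plugging `feature_sub_cost` into the Levenshtein recurrence in place of the 0/1 substitution cost yields the feature-based edit distance.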
{
"text": "The word2vec algorithm from Mikolov et al. (2013a) uses word distributions to train representations of words as vectors in high-dimensional vector space; such vector representations are called embeddings. Inspired by word2vec's success at creating semantically sensible embeddings for words, we apply the word2vec algorithm to characters in an attempt to create phonologically sensible embeddings for characters. We refer to this technique as char2vec. The char2vec algorithm begins by considering a window of a fixed size around every instance of character c in the training corpus (which, in this case, was the monolingual Portuguese data from the Europarl corpus; see Section 4 for more details). We tested windows of size 3 and 5. For word2vec, larger windows are typically used, but since there are far fewer characters than words, a smaller window size seemed sensible for the char2vec experiments because, unlike with word2vec, the cooccurrence vectors for char2vec are not at all sparse, so there is little need to look farther away from the target character to populate the cooccurrence vector. Once the desired windows around different occurrences of c were established, a neural network was used to generate the vector embedding for c. The neural network used was a simple feed-forward network with an input layer and an output layer both having dimensionality equal to the number of characters in the character set, and with a single hidden layer of dimensionality 16. The network was then trained using either the continuous bag of words (CBOW) method or the skip-gram method, both described in Mikolov et al. (2013a) . Once the network finished training, the trained weight matrix used to transition from the input layer to the hidden layer was used to generate the embeddings for all of the characters. Specifically, for the character at index i in the input vector, its embedding was row i of the weight matrix. 
An embedding was also created for the empty string by pretending that there was an empty-string symbol between every two letters in the training data.",
"cite_spans": [
{
"start": 28,
"end": 50,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF29"
},
{
"start": 1608,
"end": 1630,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Char2vec edit distance",
"sec_num": "3.3"
},
{
"text": "Once these embeddings were trained, the char2vec edit distance between any two characters was defined as one divided by the cosine distance between the embeddings for those two characters, which is given by the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Char2vec edit distance",
"sec_num": "3.3"
},
{
"text": "(cosdist(c 1 , c 2 )) \u22121 = || c 1 || || c 2 || c 1 \u2022 c 2 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Char2vec edit distance",
"sec_num": "3.3"
},
{
"text": "where c 1 and c 2 are the vector embeddings of c 1 and c 2 . The negative exponent is there because the cosine is greater for more similar vectors, whereas we want a smaller penalty for more similar vectors. For insertions and deletions, the same equation is used, except that either c 1 (for insertions) or c 2 (for deletions) is because deleting a character can be thought of as replacing it with the empty string, and inserting a character can be thought of as replacing the empty string with that character. These embedding-based edit distances are founded upon two assumptions: First, as with the feature-based edit distance methods in Section 3.2, these methods assume that, across a pair of cognates, it is more likely for one character to substitute for a character phonologically similar to it than for a character that is not very phonologically similar to it. Secondly, the embedding-based methods make the further assumption that the distribution of a character can give an accurate portrayal of the character's phonological nature. Distributional facts certainly can shed light on the phonological properties of a speech sound; for example, Peperkamp et al. (2006) created an algorithm that was highly effective at determining which sounds were allophones vs. distinct phonemes based on the distributions of those sounds. Despite this success, it is not necessarily the case that distributional evidence is useful for cognate determination, since identifying allophones within a language might entail significantly different types of evidence than identifying cognates across languages.",
"cite_spans": [
{
"start": 1154,
"end": 1177,
"text": "Peperkamp et al. (2006)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Char2vec edit distance",
"sec_num": "3.3"
},
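The inverse-cosine penalty of equation (1) is easy to sketch with toy vectors (the paper's embeddings are 16-dimensional and trained with char2vec; the 3-dimensional vectors below are invented for illustration):

```python
import math

def inv_cosine_penalty(v1, v2):
    """Substitution penalty for two character embeddings: the reciprocal of
    their cosine similarity, so similar characters get a small penalty and
    dissimilar ones a large penalty. Assumes a positive dot product."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return (norm1 * norm2) / dot

# Toy 3-d "embeddings": d and t are placed close together, u far away.
e_d = [0.9, 0.1, 0.0]
e_t = [0.8, 0.2, 0.1]
e_u = [0.1, 0.9, 0.4]
print(inv_cosine_penalty(e_d, e_t) < inv_cosine_penalty(e_d, e_u))  # True
```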
{
"text": "The embedding-based methods in the previous section all derive their embeddings from a single training language (in this case Portuguese). We now try to utilize cross-linguistic information from three Romance languages, namely Portuguese, Italian, and French. The idea behind this approach is to identify cognates amongst Portuguese, Italian, and French and to use those cognates to determine which phonological differences are likely to be present in Romance cognate pairs and which are not and to apply this information to the test language of Spanish (which is not used as a training language).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cognate-based edit distance",
"sec_num": "3.4"
},
{
"text": "The following criteria were used to generate training examples for this experiment; positive examples were identified by finding any pairs (w 1 , w 2 ) that satisfied criteria (1), (2), (3), and (4a), while negative examples were identified by finding any pairs (w 1 , w 2 ) that satisfied criteria (1), (2), (3), and (4b):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cognate-based edit distance",
"sec_num": "3.4"
},
{
"text": "1. w 1 and w 2 are from different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cognate-based edit distance",
"sec_num": "3.4"
},
{
"text": "w 2 is greater than 0 but less than some specified amount d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Levenshtein edit distance between w 1 and",
"sec_num": "2."
},
{
"text": "3. Both w 1 and w 2 are at least 4 characters long.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Levenshtein edit distance between w 1 and",
"sec_num": "2."
},
{
"text": "The most likely English translation of w 1 is the same as the most likely English translation of w 2 . (b) The cosine similarity between the GloVe embeddings (Pennington et al., 2014) of the most likely English translation of w 1 and w 2 is less than 0.5.",
"cite_spans": [
{
"start": 158,
"end": 183,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": "4."
},
{
"text": "For criterion (1), the languages considered were Portuguese, French, and Italian, which are all of the Romance languages (besides the test language of Spanish) considered in this paper. For criterion (2), we ran the experiments both with d = 1 and with d = 2. Criterion (3) is included because a low Levenshtein edit distance does not mean much for very short words\u2212for example, any two two-letter words will have an edit distance of at most 2, but this by no means implies that all two-letter words are cognates of each other. For criterion (4), the most likely English translation of a word is identified based on the IBM Model 1 (Brown et al., 1993) translation probabilities generated by running the mgiza word alignment program (Gao and Vogel, 2008) on the bilingual Portuguese/English, French/English, and Italian/English training sets. Finally, for criterion (4b), we used the GloVe embeddings from Pennington et al. (2014) as a metric for determining semantic similarity; words with a cosine similarity less than 0.5 tend not to be very semantically similar, so this criterion is intended to ensure that the negative examples are not cognates despite being phonologically similar, while criterion (4a) is meant to find positive examples by identifying words that appear phonologically similar and have similar meanings.",
"cite_spans": [
{
"start": 733,
"end": 754,
"text": "(Gao and Vogel, 2008)",
"ref_id": "BIBREF12"
},
{
"start": 906,
"end": 930,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": "4."
},
{
"text": "When d from criterion (2) was specified to be 1, this criteria generated 8,718 positive examples and 25,440 negative examples, while having d = 2 generated 27,744 positive examples and 448,746 negative examples. We restricted the number of negative examples to be equal to the number of positive examples in each case, so that there ended up being both 8,718 positive examples and 8,718 negative examples when d = 1 and 27,744 positive examples and 27,744 negative examples when d = 2. Table 2 shows some of the positive and negative example pairs generated when d = 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 486,
"end": 494,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "(a)",
"sec_num": "4."
},
{
"text": "Table 2: Positive and negative example pairs generated when d = 1. Positive examples: afgane (It.) | afghane (Fr.) [\"afghan\" / \"afghan\"]. stupide (Fr.) | stupido (It.) [\"stupid\" / \"stupid\"]. serviu (Port.) | servi (Fr.) [\"served\" / \"served\"]. discriminata (It.) | discriminada (Port.) [\"against\" / \"against\"]. finali (It.) | finale (Fr.) [\"final\" / \"final\"]. Negative examples: colmando (It.) | comando (Port.) [\"closing\" / \"command\"]. eternamente (It.) | externamente (Port.) [\"eternally\" / \"externally\"]. monge (Port.) | ronge (Fr.) [\"monk\" / \"plaguing\"]. paute (Port.) | faute (Fr.) [\"transparent\" / \"fault\"]. mentis (It.) | mentir (Fr.) [\"mindset\" / \"lie\"].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": "4."
},
{
"text": "These examples were used to train weights for each possible operation of insertion, deletion, or substitution. There are 27 characters for which to learn weights (the 26 letters plus an OTHER character 1 ); thus there are 27^2 possible substitutions that can be made. Because it makes sense for these edit distances to be symmetrical, it was deemed that the penalty for inserting a character should be the same as the penalty for deleting that character, so there were also 27 possible insertion/deletion operations. Thus, there are a total of 27^2 + 27 = 378 operations for which to learn weights. We used logistic regression to find the weights that performed best at classifying the training items as cognates or non-cognates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": "4."
},
{
"text": "The success of this approach depends on the assumption that the types of sound changes that occur between some pairs of languages within a language family are similar to the types of sound changes that occur between other pairs of languages in that language family. This assumption is not necessarily true; a language pair could easily have some systematic sound changes between its members that are not represented in any other language pairs. However, perhaps it is the case that there will be some broader trends that cut across many members of a family.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": "4."
},
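The weight-learning step can be sketched as follows. This is a deliberately simplified illustration, not the authors' implementation: it featurizes only substitutions from equal-length pairs (the full scheme covers all 378 insertion/deletion/substitution operations), keeps operations symmetric by sorting the character pair, and trains a tiny hand-rolled logistic regression:

```python
import math

def op_features(w1: str, w2: str):
    """Toy featurizer (assumes equal-length pairs for brevity): counts each
    substitution sub(a, b), with sub(a, b) == sub(b, a) so the learned
    distances stay symmetric."""
    counts = {}
    for a, b in zip(w1, w2):
        if a != b:
            op = ("sub",) + tuple(sorted((a, b)))
            counts[op] = counts.get(op, 0) + 1
    return counts

def train_op_weights(pairs, labels, epochs=200, lr=0.5):
    """Logistic regression: P(cognate) = sigmoid(bias - sum_op w[op] * count).
    A larger learned w[op] means operation op is stronger evidence of
    non-cognacy, i.e. it deserves a bigger edit penalty."""
    ops = {op for p in pairs for op in op_features(*p)}
    w = {op: 0.0 for op in ops}
    bias = 0.0
    feats = [op_features(*p) for p in pairs]
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            score = bias - sum(w[op] * c for op, c in f.items())
            pred = 1.0 / (1.0 + math.exp(-score))
            err = y - pred
            bias += lr * err                 # gradient ascent on log-likelihood
            for op, c in f.items():
                w[op] -= lr * err * c        # score depends on -w[op] * c
    return w

# Toy data: o~u substitutions occur in "cognates" (label 1),
# m~t substitutions in "non-cognates" (label 0).
pairs = [("gato", "gatu"), ("lobo", "lobu"), ("mesa", "tesa"), ("mar", "tar")]
labels = [1, 1, 0, 0]
w = train_op_weights(pairs, labels)
print(w[("sub", "o", "u")] < w[("sub", "m", "t")])  # True
```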
{
"text": "For all experiments, Spanish is treated as if it is a low-resource language for which we wish to gain information based on its high-resource relatives of Portuguese, Italian, and French. Although Spanish is a very high-resource language in real life, for these experiments we simulate low-resourcedness by not providing the computer with any Spanish training data; thus, the test data is the computer's only exposure to Spanish. We chose this path rather than using a truly low-resource language because it is much easier to create gold standards for evaluation for a high-resource language. As in real life, Portuguese, Italian, and French are treated as high-resource (that is, there is ample training data for these languages), as is English, which is used in the word alignment experiments. Thus, the char2vec embeddings are trained on Portuguese data, and the cognate-based edit distances are trained on Portuguese, French, and Italian data. All experiments used the Europarl parallel corpus (Koehn, 2005) as sources of text in the languages of interest. This corpus comes from the proceedings of the European Parliament, a governing body of the European Union.",
"cite_spans": [
{
"start": 997,
"end": 1010,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setups",
"sec_num": "4"
},
{
"text": "Cognate identification was used as a direct method of testing how well each edit distance algorithm performed. To do this, a set of pairs of likely Spanish/Portuguese cognates was formed in the same way as the likely cognates were chosen for the cognate-based embeddings (see Section 3.4). Call such a Spanish/Portuguese pair (s, p). For each such pair, each edit distance algorithm was used to identify the Portuguese word p closest with the smallest edit distance from s (with ties being broken randomly). It was then checked whether p = p closest . A total of 12,198 cognate pairs were tested in this manner for each edit distance algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cognate identification",
"sec_num": "4.1"
},
{
"text": "Though cognate identification is the most direct method for evaluating each edit distance algorithm, it has several flaws. First, it requires the selection of exactly one Portuguese word as the cognate for a given Spanish word, when in reality there may be many valid Portuguese choices (such as other inflected forms of the intended Portuguese word). More generally, cognacy only really makes sense at the lemma level, but the low-resource setting of our task means that we do not have access to lemmatization and must use words instead of lemmas, making the cognate detection task ill-defined in this context. In addition, the fact that the method of selecting cognates so closely mirrors the steps for training cognate-based edit distance may unfairly advantage the cognate-based edit distance algorithm 2 . Therefore, as a fairer and more definitely quantifiable assessment, we use the task of word alignment. Word alignment is the task of, given two sentences that are translations of each other, determining which words correspond to each other semantically across the two languages. Word alignment is an important step in many machine translation systems, such as the popular Moses software system (Koehn et al., 2007) , and effective word alignment depends heavily upon the size of the alignment algorithm's training corpus. Therefore, success at word alignment (and, by extension, machine translation programs based on word alignment) suffers greatly under a data shortage. Cognate information can be used to combat this data shortage because, if a low-resource language is related to a high-resource language, educated guesses about the meanings of words in the low-resource language may be formed based on similar-looking words in the high-resource language. Presumably, an edit distance algorithm that more accurately identifies cognates will perform better at this sort of pivoting than an edit distance algorithm that does not perform as well at cognate identification. 
Multiple authors in the past have worked on exploiting language relatedness to assist in machine translation (Mikolov et al., 2013b; Zoph et al., 2016; Cheng et al., 2016) , including via a focus on using this information to improve word alignment performance (Xiang et al., 2010) .",
"cite_spans": [
{
"start": 1205,
"end": 1225,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 2093,
"end": 2116,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF30"
},
{
"start": 2117,
"end": 2135,
"text": "Zoph et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 2136,
"end": 2155,
"text": "Cheng et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 2244,
"end": 2264,
"text": "(Xiang et al., 2010)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "4.2.1"
},
{
"text": "All word alignment experiments aim to align Spanish sentences with their English translations, with no Spanish-English training data available. Instead, a bilingual Portuguese-English training set of 1 million lowercased and tokenized sentences from the Europarl corpus is used to train Portuguese-English translation probabilities of the form t(e|p), where t(e|p) is the probability that a given Portuguese word p will be translated as the English word e. The training of these translation probabilities was accomplished using mgiza (Gao and Vogel, 2008) , which is an implementation of the IBM models of word alignment (Brown et al., 1993) . The test set for the word alignment experiments is a set of 1,000 lowercased and tokenized parallel Spanish/English sentences from the Europarl corpus. The test set also contains a gold standard set of alignments from the NAACL 2006 shared task on statistical machine translation, available at http://www.statmt.org/wmt06/shared-task/. These gold standard alignments were generated using automatic methods (the exact methods are not stated), so they can be expected to contain some errors, but for the purposes of low-resource NLP these errors are expected to be negligible.",
"cite_spans": [
{
"start": 534,
"end": 555,
"text": "(Gao and Vogel, 2008)",
"ref_id": "BIBREF12"
},
{
"start": 621,
"end": 641,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot-based alignment algorithm",
"sec_num": "4.2.2"
},
{
"text": "In order to pivot from Portuguese (the pivot language used for training) to Spanish (the lowresource language used for testing), we define the translation score 3 t(e|s) between English word e and Spanish word s as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot-based alignment algorithm",
"sec_num": "4.2.2"
},
{
"text": "t(e|s) = arg max p\u2208P t(e|p) ed(s, p) + \u03bbn (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot-based alignment algorithm",
"sec_num": "4.2.2"
},
{
"text": "where P is the set of all Portuguese words, ed(s, p) is the edit distance between s and p, and \u03bbn is the product of a smoothing factor \u03bb and the number of edit operations n, where \u03bb was optimized for each edit distance algorithm to scale the relative importance of t and ed. This definition encodes our assumption that similar-looking Spanish and Portuguese words (i.e., Spanish and Portuguese words with low edit distances) will have similar meanings. The Spanish and English sentences are then aligned based solely on this translation score (as in IBM Model 1), without reference to any of the properties such as distortion or fertility used in higher IBM models. The choice to only use translation probability was made for simplicity; since the focus of these experiments is on edit distance algorithms, not on alignment algorithms, it was simplest to use the most basic alignment algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot-based alignment algorithm",
"sec_num": "4.2.2"
},
{
"text": "In the typical instantiation of IBM Model 1, the choice of which target word to align with source word a is made by iterating through all words in the target sentence and finding which has the greatest translation probability for a. This pivot-based formalism adds another step: Now, for each Spanish word s, the choice of which word to align with is made by iterating through all of s's closest Portuguese edit distance neighbors, and for each of those iterating through all words in the English sentence, to find which pair of a Portuguese neighbor and an English word yields the highest translation probability, and then aligning with that English word. Because IBM Model 1 treats all word alignments as independent, the probability of a set a of word alignments that align English sentence e and Spanish sentence s can be maximized simply by maximizing the probabilities of each individual alignment between a Spanish word s and some English word e\u2212that is, by aligning each Spanish word s with the English word e that maximizes t(e|s).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot-based alignment algorithm",
"sec_num": "4.2.2"
},
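Because Model 1 treats alignments as independent, the decision reduces to a per-word argmax. A minimal sketch, assuming the pivot-based scores t(e|s) have already been computed into a lookup table (the table values and sentence below are illustrative):

```python
def align_model1(sp_sent, en_sent, score):
    """IBM-Model-1-style alignment: each Spanish word independently picks the
    English word with the highest pivot-based translation score t(e|s)."""
    alignment = []
    for i, s in enumerate(sp_sent):
        j = max(range(len(en_sent)),
                key=lambda j: score.get((en_sent[j], s), 0.0))
        alignment.append((i, j))
    return alignment

score = {("the", "la"): 0.30, ("house", "casa"): 0.40, ("white", "blanca"): 0.45}
assert align_model1(["la", "casa", "blanca"], ["the", "white", "house"], score) == \
       [(0, 0), (1, 2), (2, 1)]
```

Note that nothing prevents two Spanish words from choosing the same English word; that is a property of Model 1 itself, not of the pivoting step.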
{
"text": "An advantage of the word alignment task is that we can straightforwardly quantify the results via the Alignment Error Rate (AER), defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "AER = 1 \u2212 2P p + c",
"eq_num": "(3)"
}
],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "where P is the number of predicted alignments that are correct, c is the number of alignments in the gold standard, and p is the number of predicted alignments (where an alignment is defined as a connection between one Spanish word and one English word). AER falls within the range of 0 to 1, where it is best to be as close to 0 as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
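A small helper makes the metric concrete, representing each alignment as a (Spanish index, English index) pair; the function and variable names are ours:

```python
def aer(predicted, gold):
    """Alignment Error Rate: 1 - 2P / (p + c), where P = correct predictions,
    p = number of predicted alignments, c = number of gold alignments."""
    P = len(set(predicted) & set(gold))
    return 1.0 - 2.0 * P / (len(predicted) + len(gold))

gold = [(0, 0), (1, 2), (2, 1)]
assert aer([(0, 0), (1, 2), (2, 1)], gold) == 0.0          # perfect alignment
assert aer([(0, 0), (1, 1), (2, 2)], gold) == 1.0 - 2.0 / 6.0
```

This is the "sure-alignments-only" form of AER; the shared-task definition with sure/possible distinctions reduces to it when every gold link is treated as sure.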
{
"text": "6 Results and discussion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The results at the cognate identification task are reported in and Italian/Portuguese cognate pairs used to train the weights for edit distance operations, since d = 2 performed better than d = 1. This is likely because increasing the maximum edit distance between cognate pairs creates more pairs for the training set. Table 4 shows the results at the word alignment task. The first set of methods are included as baselines. Random refers to randomly aligning each Spanish word with one English word, while Diagonal refers to aligning the i th Spanish word with the i th English word for all i less than the minimum of the two sentences' lengths, so these two methods show how well an extremely naive model can perform. Meanwhile, fast-align is a state-of-the-art word alignment program from the cdec package (Dyer et al., 2010) and was trained directly on the Spanish-English section of Europarl; thus, its performance is indicative of the best performance that can reasonably be expected on this task in a high-resource setting. The second part of the table compares the various edit distance algorithms. To scale each algorithm's set of penalties to a reasonable range of penalty ratios, all algorithms were tested with each smoothing factor \u03bb in the set [0.01, 1, 5, 10, 50, 100] , where the smoothing factor in question was added to each penalty used in the calculation of a word pair's overall edit distance. Results are reported with the bestperforming smoothing factor for each algorithm.",
"cite_spans": [
{
"start": 810,
"end": 829,
"text": "(Dyer et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 1259,
"end": 1284,
"text": "[0.01, 1, 5, 10, 50, 100]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 320,
"end": 327,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Cognate identification",
"sec_num": "6.1"
},
{
"text": "The char2vec algorithm was tested both using the CBOW and the skip-gram methods from Mikolov et al. (2013a) as well as with window sizes of 3 and 5 (i.e., 1 character on either side of the target word and 2 characters on either side of the target word). CBOW and skip-gram performed comparably; because CBOW performed slightly better, results are reported with it. The window size of 3 performed significantly better than a window size of 5, so re- sults are reported with this window size. This difference is likely because there are so few characters in the alphabet that considering characters in a wider window ceases to uniquely characterize a given character, since pretty much any character can easily occur two letters away from pretty much any other character. For the cognate-based edit distance, as in Section 6.1, results are reported with d = 2. Part 2 of this table only uses IBM Model 1. To see whether more advanced models can improve performance, the third section of Table 4 uses basic implementations of the higher IBM models from Brown et al. (1993) and the HMM model from Vogel et al. (1996) . For each of these models, the parameters from training on Portuguese-English alignment were transferred directly to the Spanish-English case. Note that, in this section, IBM M1 really refers to using the IBM Model 1 algorithm with translation probabilities trained using IBM Model 4; this is why the IBM Model 1 performs better in part 3 of the table than in part 2, because part 2 only uses translation probabilities trained with IBM Model 1.",
"cite_spans": [
{
"start": 85,
"end": 107,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF29"
},
{
"start": 1050,
"end": 1069,
"text": "Brown et al. (1993)",
"ref_id": "BIBREF4"
},
{
"start": 1093,
"end": 1112,
"text": "Vogel et al. (1996)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 985,
"end": 992,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Word alignment",
"sec_num": "6.2"
},
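The char2vec scheme turns distributional character embeddings into substitution penalties. A minimal sketch of the penalty side, with hypothetical 2-dimensional embeddings standing in for vectors that would really be trained with CBOW on character sequences (e.g. each word as a list of characters, one character of context on either side); all names and values here are illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sub_penalty(c1, c2, emb):
    """Substitution penalty from distributional character embeddings:
    characters with more similar distributions cost less to substitute."""
    return 1.0 - cosine(emb[c1], emb[c2])

# Hypothetical embeddings: "b" and "v" distributionally close, "k" distant.
emb = {"b": [1.0, 0.1], "v": [0.9, 0.2], "k": [0.0, 1.0]}
assert sub_penalty("b", "v", emb) < sub_penalty("b", "k", emb)
```

Any monotone transform of cosine similarity would do here; the essential design choice is only that distributional closeness lowers the edit cost.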
{
"text": "Finally, the fourth part of Table 4 shows results with various pivot languages that vary in their level of relatedness to Spanish. Spanish itself is included in this section as a baseline for the best possible performance under the pivot-based framework.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Word alignment",
"sec_num": "6.2"
},
{
"text": "For word alignment, feature-based edit distance did not beat the Levenshtein baseline, while the vowelconsonant based edit distance led to a modest improvement. These results may arise from the fact that the only edit distance neighbors being considered are the words that actually occur in the Portuguese corpus, which means that creating a more phonologically-informed model might not do much good because all candidates are already phonologically well-formed. For example, the feature-based edit distance algorithm would strongly indicate that blanco and branco are more likely to be cognates than blanco and bkanco; but there is no real need to make this distinction since no word like bkanco will occur in the Portuguese corpus anyway.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.3"
},
{
"text": "The char2vec method also led to modest improvements; its performance may have been hindered by its assumption that distributional similarity implies phonological similarity, when in fact there are some reasons to suppose the contrary. For example, a language might have voicing assimilation of all conso-nants in a cluster. This would mean that [t] and [d] would never appear next to the same consonants as each other and would thus have quite different distributions, despite only differing in voicing.",
"cite_spans": [
{
"start": 353,
"end": 356,
"text": "[d]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.3"
},
{
"text": "Cognate-based edit distance had the strongest performance. Since it was only trained on cognate pairs using Romance languages other than Spanish, while it was tested on Spanish words, this result justifies our assumption that facts about the phonological relatedness of Spanish's relatives can also be used to learn useful information about Spanish. The cognate identification task corroborates the word alignment results in indicating that cognate-based edit distance is the best-performing algorithm for bootstrapping information from a high-resource language to one of its low-resource relatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.3"
},
{
"text": "The second part of Table 4 shows that three more linguistically informed algorithms presented here (particularly cognate-based edit distance) outperform the less-informed basic Levenshtein algorithm. These results broadly suggest that incorporating linguistic information can be of significant benefit to NLP applications with low-resource languages, since it was helpful here to utilize information about the phonological relatedness between languages rather than using the flat distribution of the basic Levenshtein algorithm. Though the alignment results in the second part of Table 4 are not impressive at an absolute level, the results in the third part of the table show that alignment performance can be significantly improved by preserving the same pivot-based setup but using more advanced alignment algorithms, so there is hope that refining the alignment algorithms more could further improve performance. Finally, the bottom segment of Table 4 shows that alignment performance generally improves as the pivot language becomes more closely related to Spanish, corroborating the claim that it is language relatedness that fuels the success of the pivot-based alignment method. (Figure 1 shows a family tree of the pivot languages used.)",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 580,
"end": 587,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 948,
"end": 956,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1188,
"end": 1197,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.3"
},
{
"text": "We have presented three new techniques for computing edit distance. All of these make use of more linguistic information (specifically, cross-linguistic phonological information) than the baseline of Lev- enshtein edit distance, and all of them perform at least as well as Levenshtein edit distance at lowresource cognate identification and word alignment. In particular, cognate-based edit distance brings the greatest performance improvements in these tasks compared to Levenshtein edit distance. This work focuses on the IBM alignment models, so future work could explore more advanced algorithms relating to word alignment and machine translation, such as neural machine translation (Collobert and Weston, 2008; Cho et al., 2014; Bahdanau et al., 2015) , or phrase-based machine translation (Koehn et al., 2003; Och and Ney, 2004) to bring alignment performance improvement. In addition, the edit distance algorithms could be refined to encode more phonological information. For example, separate penalties could be assigned based on the environment in which changes occur since environment is highly significant in phonological changes both within and across languages. Another refinement specific to the cognate-based algorithm would be training on a list of likely cognates to choose weights and then using those weights to choose an updated list of likely cognates, and iterating this process until the weights converge; the algorithm as presented here only represents one iteration of such a process, but further iterations might yield better weights.",
"cite_spans": [
{
"start": 687,
"end": 715,
"text": "(Collobert and Weston, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 716,
"end": 733,
"text": "Cho et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 734,
"end": 756,
"text": "Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 795,
"end": 815,
"text": "(Koehn et al., 2003;",
"ref_id": "BIBREF18"
},
{
"start": 816,
"end": 834,
"text": "Och and Ney, 2004)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "Proceedings of the Society for Computation in Linguistics (SCiL) 2018, pages 102-112. Salt Lake City, Utah, January 4-7, 2018",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A character with a diacritic is represented as 2 characters, the base character plus a diacritic character that is collapsed into the OTHER category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "An anonymous reviewer notes that this second problem could be overcome by testing on human-generated cognate lists, which would be a useful metric to compute in future work. However, the other problem with the cognate task remains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We call it a score rather than a probability because we do not normalize, so the scores do not sum to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers and the members of Yale's 2017 Senior Essay in Linguistics class for their helpful suggestions on this work. Any errors remain our own.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. International Confer- ence on Learning Representations (ICLR 2015).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An experiment in the semi-automatic identification of falsecognates between english and polish",
"authors": [
{
"first": "Gosia",
"middle": [],
"last": "Barker",
"suffix": ""
},
{
"first": "F",
"middle": [
"E"
],
"last": "Richard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sutcliffe",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Irish Conference on Artificial Intelligence and Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gosia Barker and Richard FE Sutcliffe. 2000. An ex- periment in the semi-automatic identification of false- cognates between english and polish. Proceedings of the Irish Conference on Artificial Intelligence and Cognitive Science.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Alignment-based discriminative string similarity. Annual meeting-Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "45",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alignment-based discriminative string similarity. An- nual meeting-Association for Computational Linguis- tics, 45.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational Linguistics, 19:263-311.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neural machine translation with pivot languages",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.02201"
]
},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016. Neural machine translation with pivot languages. arXiv preprint arXiv:1604.02201.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "KyungHyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. International Conference on Learning Rep- resentations (ICLR 2013).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic detection of cognates using orthographic alignment",
"authors": [
{
"first": "Alina",
"middle": [
"Maria"
],
"last": "Ciobanu",
"suffix": ""
},
{
"first": "Liviu",
"middle": [
"P"
],
"last": "Dinu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "99--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alina Maria Ciobanu and Liviu P. Dinu. 2014. Auto- matic detection of cognates using orthographic align- ment. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2: Short Papers, ACL 2014,, pages 99-105.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning (ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified ar- chitecture for natural language processing: Deep neu- ral networks with multitask learning. Proceedings of the 25th International Conference on Machine Learn- ing (ICML 2008).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Stochastic contextual edit distance and probabilistic fsts",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "625--630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2014. Stochastic contextual edit distance and probabilistic fsts. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2: Short Papers, ACL 2014,, pages 625-630.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Pivot-based triangulation for low-resource languages",
"authors": [
{
"first": "Rohit",
"middle": [],
"last": "Dholakia",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit Dholakia and Anoop Sarkar. 2014. Pivot-based tri- angulation for low-resource languages. Proc. AMTA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "cdec: A decoder, alignment, and learning framework for finitestate and context-free translation models",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
},
{
"first": "Hendra",
"middle": [],
"last": "Setiawan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Ferhan",
"middle": [],
"last": "Ture",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Jonathan Weese, Hendra Setiawan, Adam Lopez, Ferhan Ture, Vladimir Eidelman, Juri Ganitke- vitch, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite- state and context-free translation models. Proceedings of the ACL 2010 System Demonstrations.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parallel implementations of word alignment tool. Software engineering, testing, and quality assurance for natural language processing",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "5",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Gao and Stephan Vogel. 2008. Parallel implementa- tions of word alignment tool. Software engineering, testing, and quality assurance for natural language processing, 5:49-57.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic identification of cognates and false friends in french and english",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Oana",
"middle": [],
"last": "Frunza",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Inkpen, Oana Frunza, and Grzegorz Kondrak. 2005. Automatic identification of cognates and false friends in french and english. Proceedings of the Inter- national Conference Recent Advances in Natural Lan- guage Processing, 9.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic cognate classification with a support vector machine",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Sofroniev",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 13th Conference on Natural Language Processing",
"volume": "16",
"issue": "",
"pages": "128--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger and Pavel Sofroniev. 2016. Automatic cognate classification with a support vector machine. Proceedings of the 13th Conference on Natural Lan- guage Processing, 16:128-134.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using support vector machines and state-of-theart algorithms for phonetic alignment to identify cognates in multi-lingual wordlists",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
},
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Sofroniev",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1204--1215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger, Johann-Mattis List, and Pavel Sofroniev. 2017. Using support vector machines and state-of-the- art algorithms for phonetic alignment to identify cog- nates in multi-lingual wordlists. Proceedings of the 15th Conference of the European Chapter of the Asso- ciation for Computational Linguistics: Volume 1, Long Papers, pages 1204-1215.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Phylogenetic inference from word lists using weighted alignment with empirically determined weights",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2013,
"venue": "Language Dynamics and Change",
"volume": "3",
"issue": "",
"pages": "245--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger. 2013. Phylogenetic inference from word lists using weighted alignment with empirically deter- mined weights. Language Dynamics and Change 3, 2:245-291.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Estimating word translation probabilities from unrelated monolingual corpora using the em algorithm",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Kevin Knight. 2000. Estimating word translation probabilities from unrelated monolingual corpora using the em algorithm. AAAI/IAAI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. Proceed- ings of the 2003 Conference of the North American Chapter of the Association for Computational Linguis- tics on Human Language Technology, 1:48-54.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Moses open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions)",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. Moses open source toolkit for statistical machine translation. Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions), pages 177-180.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "5",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. MT Summit, 5:79-86.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Identifying cognates by phonetic and semantic similarity",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2001. Identifying cognates by pho- netic and semantic similarity. Proceedings of the Sec- ond Meeting of the North American Chapter of the As- sociation for Computational Linguistics on Language Technologies, pages 1-8.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Algorithms for language reconstruction",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2002. Algorithms for language re- construction. Ph.D. thesis, University of Toronto.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Binary codes capable of correcting deletions, insertions, and reversals",
"authors": [
{
"first": "Vladimir",
"middle": [
"I"
],
"last": "Levenshtein",
"suffix": ""
}
],
"year": 1966,
"venue": "Soviet Physics Doklady",
"volume": "10",
"issue": "8",
"pages": "707--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707-710.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sequence comparison in historical linguistics",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List. 2013. Sequence comparison in historical linguistics. Ph.D. thesis, Heinrich-Heine- Universit\u00e4t D\u00fcsseldorf.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multipath translation lexicon induction via bridge languages",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S. Mann and David Yarowsky. 2001. Mul- tipath translation lexicon induction via bridge lan- guages. Proceedings of the second meeting of the North American Chapter of the Association for Com- putational Linguistics on Language technologies.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improved statistical machine translation using monolingually-derived paraphrases",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuval Marton, Chris Callison-Burch, and Philip Resnik. 2009. Improved statistical machine translation using monolingually-derived paraphrases. Proceedings of the 2009 Conference on Empirical Methods in Natu- ral Language Processing, 1.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Improving speech recognition and keyword search for low resource languages using web data",
"authors": [
{
"first": "Gideon",
"middle": [],
"last": "Mendels",
"suffix": ""
},
{
"first": "Erica",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J",
"F"
],
"last": "Gales",
"suffix": ""
},
{
"first": "Kate",
"middle": [
"M"
],
"last": "Knill",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Ragni",
"suffix": ""
},
{
"first": "Haipeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon Mendels, Erica Cooper, Victor Soto, Julia Hirschberg, Mark JF Gales, Kate M. Knill, Anton Ragni, and Haipeng Wang. 2015. Improving speech recognition and keyword search for low resource lan- guages using web data. Sixteenth Annual Conference of the International Speech Communication Associa- tion.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. International Conference on Learning Representations (ICLR 2013).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Accurate unsupervised joint named-entity extraction from unaligned parallel text",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Munro",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 4th Named Entity Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Munro and Christopher D. Manning. 2012. Accurate unsupervised joint named-entity extraction from unaligned parallel text. Proceedings of the 4th Named Entity Workshop.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The align- ment template approach to statistical machine transla- tion. Computational Linguistics, 30.4:417-449.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word rep- resentation.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The acquisition of allophonic rules: Statistical learning with linguistic constraints",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Peperkamp",
"suffix": ""
},
{
"first": "Rozenn",
"middle": [
"Le"
],
"last": "Calvez",
"suffix": ""
},
{
"first": "Jean-Pierre",
"middle": [],
"last": "Nadal",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2006,
"venue": "Cognition",
"volume": "101",
"issue": "",
"pages": "B31--B41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Peperkamp, Rozenn Le Calvez, Jean-Pierre Nadal, and Emmanuel Dupoux. 2006. The acquisition of allophonic rules: Statistical learning with linguistic constraints. Cognition, 101.3:B31-B41.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Constructing parallel corpora for six Indian languages via crowdsourcing",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post, Chris Callison-Burch, and Miles Osborne. 2012. Constructing parallel corpora for six indian lan- guages via crowdsourcing. Proceedings of the Seventh Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Semi-automatic learning of transfer rules for machine translation of low-density languages",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Probst",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Student Session at the 14th European Summer School in Logic, Language and Information (ESSLLI-02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Probst, Jaime Carbonell, and Lori Levin. 2002. Semi-automatic learning of transfer rules for machine translation of low-density languages. Pro- ceedings of the Student Session at the 14th European Summer School in Logic, Language and Information (ESSLLI-02).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Learning string-edit distance",
"authors": [
{
"first": "Eric",
"middle": [
"Sven"
],
"last": "Ristad",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"N"
],
"last": "Yianilos",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "20",
"issue": "5",
"pages": "522--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Sven Ristad and Peter N. Yianilos. 1998. Learn- ing string-edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 5:522-532.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Using cognates to align sentences in bilingual corpora",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "George",
"middle": [
"F"
],
"last": "Foster",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Isabelle",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 1993 conference of the Centre for Advanced Studies on Collaborative research: distributed computing",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Simard, George F. Foster, and Pierre Isabelle. 1993. Using cognates to align sentences in bilin- gual corpora. Proceedings of the 1993 conference of the Centre for Advanced Studies on Collaborative re- search: distributed computing, 2.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Multilingual MLP features for low-resource LVCSR systems",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Sriram",
"middle": [],
"last": "Ganapathy",
"suffix": ""
},
{
"first": "Hynek",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Thomas, Sriram Ganapathy, and Hynek Herman- sky. 2012. Multilingual mlp features for low-resource lvcsr systems. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th conference on computational linguistics",
"volume": "2",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based word alignment in statistical trans- lation. Proceedings of the 16th conference on compu- tational linguistics, 2:836-841.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The string to string correction problem",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Fischer",
"suffix": ""
}
],
"year": 1974,
"venue": "Journal of the ACM (JACM)",
"volume": "21",
"issue": "1",
"pages": "168--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Wagner and Michael J. Fischer. 1974. The string to string correction problem. Journal of the ACM (JACM), 21.1:168-173.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Enriching a massively multilingual database of interlinear glossed text",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "William",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"Wayne"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "Glenn",
"middle": [],
"last": "Slayden",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Georgi",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Crowgey",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
}
],
"year": 2016,
"venue": "Language Resources and Evaluation",
"volume": "50",
"issue": "",
"pages": "321--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia, William D. Lewis, Michael Wayne Goodman, Glenn Slayden, Ryan Georgi, Joshua Crowgey, and Emily M. Bender. 2016. Enriching a massively multi- lingual database of interlinear glossed text. Language Resources and Evaluation, 50:321-349.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Diversify and combine: Improving word alignment for machine translation on low-resource languages",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Yonggang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Xiang, Yonggang Deng, and Bowen Zhou. 2010. Diversify and combine: Improving word alignment for machine translation on low-resource languages. Pro- ceedings of the ACL 2010 Conference Short Papers.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Transfer learning for lowresource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.02201"
]
},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low- resource neural machine translation. arXiv preprint arXiv:1604.02201.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Family tree of the pivot languages used; language groups were derived from Ethnologue(Lewis et al., 2009).",
"num": null
},
"TABREF0": {
"content": "<table><tr><td>single feature (namely, voicing) has changed, while</td></tr><tr><td>substituting a b for a t incurs a penalty of 2 because</td></tr><tr><td>two features (voicing and place) have changed. In</td></tr><tr><td>practice, it is impossible to enact this approach rigor-</td></tr><tr><td>ously because orthography does not map cleanly to</td></tr><tr><td>phonology and does not have consistent phonolog-</td></tr><tr><td>ical properties across languages. Therefore, many</td></tr><tr><td>of the weights used for this method are by necessity</td></tr><tr><td>somewhat arbitrary because the set of phonological</td></tr><tr><td>features that we assign to each character does not</td></tr><tr><td>necessarily match that character's true features in all</td></tr><tr><td>contexts and across all languages.</td></tr></table>",
"text": "Examples of Levenshtein edit distance.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "Some of the examples used for training the cognate-based edit distance. The table on the left shows positive examples (pairs deemed to be cognates), while the table on the right shows negative examples (pairs deemed not to be cognates).",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table><tr><td>. The cognate-based results are re-</td></tr><tr><td>ported with d = 2, where d is the maximum edit dis-</td></tr><tr><td>tance between French/Italian, French/Portuguese,</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table/>",
"text": "Results on cognate identification.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table/>",
"text": "Smoothed word alignment results for various experimental settings.",
"type_str": "table",
"html": null,
"num": null
}
}
}
}