{
"paper_id": "P11-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:48:25.230813Z"
},
"title": "An Algorithm for Unsupervised Transliteration Mining with an Application to Word Alignment",
"authors": [
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {}
},
"email": "sajjad@ims.uni-stuttgart.de"
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {}
},
"email": "fraser@ims.uni-stuttgart.de"
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {}
},
"email": "schmid@ims.uni-stuttgart.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a language-independent method for the automatic extraction of transliteration pairs from parallel corpora. In contrast to previous work, our method uses no form of supervision, and does not require linguistically informed preprocessing. We conduct experiments on data sets from the NEWS 2010 shared task on transliteration mining and achieve an F-measure of up to 92%, outperforming most of the semi-supervised systems that were submitted. We also apply our method to English/Hindi and English/Arabic parallel corpora and compare the results with manually built gold standards which mark transliterated word pairs. Finally, we integrate the transliteration module into the GIZA++ word aligner and evaluate it on two word alignment tasks achieving improvements in both precision and recall measured against gold standard word alignments.",
"pdf_parse": {
"paper_id": "P11-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a language-independent method for the automatic extraction of transliteration pairs from parallel corpora. In contrast to previous work, our method uses no form of supervision, and does not require linguistically informed preprocessing. We conduct experiments on data sets from the NEWS 2010 shared task on transliteration mining and achieve an F-measure of up to 92%, outperforming most of the semi-supervised systems that were submitted. We also apply our method to English/Hindi and English/Arabic parallel corpora and compare the results with manually built gold standards which mark transliterated word pairs. Finally, we integrate the transliteration module into the GIZA++ word aligner and evaluate it on two word alignment tasks achieving improvements in both precision and recall measured against gold standard word alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most previous methods for building transliteration systems were supervised, requiring either handcrafted rules or a clean list of transliteration pairs, both of which are expensive to create. Such resources are also not applicable to other language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we show that it is possible to extract transliteration pairs from a parallel corpus using an unsupervised method. We first align a bilingual corpus at the word level using GIZA++ and create a list of word pairs containing a mix of non-transliterations and transliterations. We train a statistical transliterator on the list of word pairs. We then filter out a few word pairs (those which have the lowest transliteration probabilities according to the trained transliteration system) which are likely to be non-transliterations. We retrain the transliterator on the filtered data set. This process is iterated, filtering out more and more non-transliteration pairs until a nearly clean list of transliteration word pairs is left. The optimal number of iterations is automatically determined by a novel stopping criterion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We compare our unsupervised transliteration mining method with the semi-supervised systems presented at the NEWS 2010 shared task on transliteration mining (Kumaran et al., 2010) using four language pairs. We refer to this task as NEWS10. These systems used a manually labelled set of data for initial supervised training, which means that they are semi-supervised systems. In contrast, our system is fully unsupervised. We achieve an F-measure of up to 92%, outperforming most of the semi-supervised systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The NEWS10 data sets are extracted from Wikipedia InterLanguage Links (WIL) which consist of parallel phrases, whereas a parallel corpus consists of parallel sentences. Transliteration mining on the WIL data sets is easier due to a higher percentage of transliterations than in parallel corpora. We also do experiments on parallel corpora for two language pairs. To this end, we created gold standards in which sampled word pairs are annotated as either transliterations or non-transliterations. These gold standards have been submitted with the paper as supplementary material so that they are available to the research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally we integrate a transliteration module into the GIZA++ word aligner and show that it improves word alignment quality. The transliteration module is trained on the transliteration pairs which our mining method extracts from the parallel corpora. We evaluate our word alignment system on two language pairs using gold standard word alignments and achieve improvements of 10% and 13.5% in precision and 3.5% and 13.5% in recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. In section 2, we describe the filtering model and the transliteration model. In section 3, we present our iterative transliteration mining algorithm and an algorithm which computes a stopping criterion for the mining algorithm. Section 4 describes the evaluation of our mining method through both gold standard evaluation and through using it to improve word alignment quality. In section 5, we present previous work and we conclude in section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our algorithms use two different models. The first model is a joint character sequence model which we apply to transliteration mining. We use the grapheme-to-phoneme converter g2p to implement this model. The other model is a standard phrase-based MT model which we apply to transliteration (as opposed to transliteration mining). We build it using the Moses toolkit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},
{
"text": "Here, we briefly describe g2p using notation from Bisani and Ney (2008) . The details of the model, its parameters and the utilized smoothing techniques can be found in Bisani and Ney (2008) .",
"cite_spans": [
{
"start": 50,
"end": 71,
"text": "Bisani and Ney (2008)",
"ref_id": "BIBREF0"
},
{
"start": 169,
"end": 190,
"text": "Bisani and Ney (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
{
"text": "The training data is a list of word pairs (a source word and its presumed transliteration) extracted from a word-aligned parallel corpus. g2p builds a joint sequence model on the character sequences of the word pairs and infers m-to-n alignments between source and target characters with Expectation Maximization (EM) training. The m-to-n character alignment units are referred to as \"multigrams\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
{
"text": "A model built on multigrams whose source and target character sequences are longer than one character learns too much noise (non-transliteration information) from the training data and performs poorly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
{
"text": "In our experiments, we use multigrams with a maximum of one character on the source and one character on the target side (i.e., 0,1-to-0,1 character alignment units).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
{
"text": "The N-gram approximation of the joint probability can be defined in terms of multigrams q_i as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(q_1^k) \\approx \\prod_{j=1}^{k+1} p(q_j | q_{j-N+1}^{j-1})",
"eq_num": "(1)"
}
],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
{
"text": "where q_0 and q_{k+1} are set to a special boundary symbol. N-gram models of order > 1 did not work well because these models tended to learn noise (information from non-transliteration pairs) in the training data. For our experiments, we only trained g2p with the unigram model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
{
"text": "In test mode, we look for the best sequence of multigrams given a fixed source and target string and return the probability of this sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
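To illustrate what "the best sequence of multigrams" amounts to under the unigram model with 0,1-to-0,1 units, here is a minimal dynamic-programming sketch. The `logp` dictionary of multigram log probabilities is a hypothetical stand-in for the trained g2p model; the real toolkit additionally handles EM training and smoothing.

```python
def best_sequence_logprob(src, tgt, logp):
    """Best-scoring decomposition of (src, tgt) into 0,1-to-0,1
    multigrams under a unigram model.

    chart[i][j] holds the best log probability of covering src[:i]
    and tgt[:j]; each step consumes at most one character per side.
    """
    neg_inf = float("-inf")
    chart = [[neg_inf] * (len(tgt) + 1) for _ in range(len(src) + 1)]
    chart[0][0] = 0.0
    for i in range(len(src) + 1):
        for j in range(len(tgt) + 1):
            if chart[i][j] == neg_inf:
                continue
            moves = []
            if i < len(src):
                moves.append((i + 1, j, (src[i], "")))           # 1-to-0 unit
            if j < len(tgt):
                moves.append((i, j + 1, ("", tgt[j])))           # 0-to-1 unit
            if i < len(src) and j < len(tgt):
                moves.append((i + 1, j + 1, (src[i], tgt[j])))   # 1-to-1 unit
            for ni, nj, unit in moves:
                if unit in logp:
                    score = chart[i][j] + logp[unit]
                    if score > chart[ni][nj]:
                        chart[ni][nj] = score
    return chart[len(src)][len(tgt)]
```

Exponentiating the returned score gives the joint probability of the word pair, which the mining step then length-normalizes.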
{
"text": "For the mining process, we trained g2p on lists containing both transliteration pairs and non-transliteration pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sequence Model Using g2p",
"sec_num": "2.1"
},
{
"text": "We build a phrase-based MT system for transliteration using the Moses toolkit (Koehn et al., 2003) . We also tried using g2p for implementing the transliteration decoder but found Moses to perform better. Moses has the advantage of using Minimum Error Rate Training (MERT) which optimizes transliteration accuracy rather than the likelihood of the training data as g2p does. The training data contains more non-transliteration pairs than transliteration pairs. We don't want to maximize the likelihood of the non-transliteration pairs. Instead we want to optimize the transliteration performance for test data. Secondly, it is easy to use a large language model (LM) with Moses. We build the LM on the target word types in the data to be filtered.",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Transliteration System",
"sec_num": "2.2"
},
{
"text": "For training Moses as a transliteration system, we treat each word pair as if it were a parallel sentence, by putting spaces between the characters of each word. The model is built with the default settings of the Moses toolkit. The distortion limit \"d\" is set to zero (no reordering). The LM is implemented as a five-gram model using the SRILM-Toolkit (Stolcke, 2002) , with Add-1 smoothing for unigrams and Kneser-Ney smoothing for higher n-grams.",
"cite_spans": [
{
"start": 353,
"end": 368,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Transliteration System",
"sec_num": "2.2"
},
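The preprocessing just described (treating each word pair as a "parallel sentence" of space-separated characters before training Moses) can be sketched as follows; the function name is ours, not from the paper.

```python
def to_char_corpus(word_pairs):
    """Turn each word pair into a pseudo parallel sentence pair by
    putting spaces between the characters of each word."""
    src_lines, tgt_lines = [], []
    for src, tgt in word_pairs:
        src_lines.append(" ".join(src))
        tgt_lines.append(" ".join(tgt))
    return src_lines, tgt_lines
```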
{
"text": "Training of a supervised transliteration system requires a list of transliteration pairs which is expensive to create. Such lists are usually either built manually or extracted using a classifier trained on manually labelled data and other language-dependent information. In this section, we present an iterative method for the extraction of transliteration pairs from parallel corpora which is fully unsupervised and language-pair independent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Transliteration Pairs",
"sec_num": "3"
},
{
"text": "Initially, we extract a list of word pairs from a word-aligned parallel corpus using GIZA++. The extracted word pairs are either transliterations, other kinds of translations, or misalignments. In each iteration, we first train g2p on the list of word pairs. Then we delete those 5% of the (remaining) training data which are least likely to be transliterations according to g2p. 1 We determine the best iteration according to our stopping criterion and return the filtered data set from this iteration. The stopping criterion uses unlabelled held-out data to predict the optimal stopping point. The following sections describe the transliteration mining method in detail.",
"cite_spans": [
{
"start": 380,
"end": 381,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Transliteration Pairs",
"sec_num": "3"
},
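A minimal sketch of this loop, assuming a `score_fn` stands in for training g2p on the current data and returning a length-normalized joint probability for each pair. Note that in the actual method the model is retrained on the shrinking data set every iteration, whereas this sketch keeps the scorer fixed.

```python
def filter_transliterations(pairs, score_fn, stop_iteration, drop_rate=0.05):
    """Iteratively drop the 5% of word pairs that look least like
    transliterations; the paper retrains g2p on `data` each round,
    here approximated by a fixed score_fn."""
    data = list(pairs)
    for _ in range(stop_iteration):
        # sort ascending by transliteration score and drop the bottom 5%
        scored = sorted(data, key=score_fn)
        n_drop = max(1, int(len(scored) * drop_rate))
        data = scored[n_drop:]
    return data
```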
{
"text": "We will first describe the iterative filtering algorithm (Algorithm 1) and then the algorithm for the stopping criterion (Algorithm 2). In practice, we first run Algorithm 2 for 100 iterations to determine the best number of iterations. Then, we run Algorithm 1 for that many iterations. Initially, the parallel corpus is word-aligned using GIZA++ (Och and Ney, 2003) , and the alignments are refined using the grow-diag-final-and heuristic (Koehn et al., 2003) . We extract all word pairs which occur as 1-to-1 alignments in the word-aligned corpus. We ignore non-1-to-1 alignments because they are less likely to be transliterations for most language pairs. The extracted set of word pairs will be called \"list of word pairs\" later on. We use the list of word pairs as the training data for Algorithm 1.",
"cite_spans": [
{
"start": 348,
"end": 367,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF13"
},
{
"start": 441,
"end": 461,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "Algorithm 1 builds a joint sequence model using g2p on the training data and computes the joint probability of all word pairs according to g2p. We normalize the probabilities by taking the nth root. 4: Build a joint source channel model on the training data using g2p and compute the joint probability of every word pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "Remove the 5% word pairs with the lowest length-normalized probability from the training data. {and repeat the process with the filtered training data} 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "I \u2190 I+1 7: until I = Stopping iteration from Algorithm 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "where n is the average length of the source and the target string. The training data contains mostly non-transliteration pairs and a few transliteration pairs. Therefore the training data is initially very noisy and the joint sequence model is not very accurate. However, it can successfully be used to eliminate a few word pairs which are very unlikely to be transliterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
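The normalization amounts to taking the nth root of the joint probability, with n the average of the two string lengths, so that longer pairs are not penalized simply for containing more multigrams. A sketch:

```python
def length_normalized(joint_prob, source, target):
    """nth root of the joint probability, where n is the average
    of the source and target string lengths."""
    n = (len(source) + len(target)) / 2.0
    return joint_prob ** (1.0 / n)
```

With this normalization, a pair whose characters each contribute the same per-multigram probability gets the same score regardless of its length.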
{
"text": "On the filtered training data, we can train a model which is slightly better than the previous model. Using this improved model, we can eliminate further non-transliterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "Our results show that at the iteration determined by our stopping criterion, the filtered set mostly contains transliterations and only a small number of transliterations have been mistakenly eliminated (see section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "Algorithm 2 automatically determines the best stopping point of the iterative transliteration mining process. It is an extension of Algorithm 1. It runs the iterative process of Algorithm 1 on half of the list of word pairs (training data) for 100 iterations. For every iteration, it builds a transliteration system on the filtered data. The transliteration system is tested on the source side of the other half of the list of word pairs (held-out). The output of the transliteration system is matched against the target side of the held-out data. (These target words are either transliterations, translations or misalignments.) We match the target side of the held-out data under the assumption that all matches are transliterations. The iteration where the output of the transliteration system best matches the held-out data is chosen as the stopping iteration of Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "Algorithm 2 Selection of the stopping iteration for the transliteration mining algorithm 1: Create clusters of word pairs from the list of word pairs which have a common prefix of length 2 both on the source and target language side. 2: Randomly add each cluster either to the training data or to the held-out data. 3: I \u2190 0 4: while I < 100 do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "Build a joint sequence model on the training data using g2p and compute the length-normalized joint probability of every word pair in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5:",
"sec_num": null
},
{
"text": "Remove the 5% word pairs with the lowest probability from the training data. {The training data will be reduced by 5% of the rest in each iteration} 7: Build a transliteration system on the filtered training data and test it using the source side of the held-out and match the output against the target side of the held-out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
{
"text": "I \u2190 I+1 9: end while 10: Collect statistics of the matching results and take the median from 9 consecutive iterations (median9). 11: Choose the iteration with the best median9 score for the transliteration mining process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8:",
"sec_num": null
},
{
"text": "We will now describe Algorithm 2 in detail. Algorithm 2 initially splits the word pairs into training and held-out data. This could be done randomly, but it turns out that this does not work well for some tasks. The reason is that the parallel corpus contains inflectional variants of the same word. If two variants are distributed over training and held-out data, then the one in the training data may cause the transliteration system to produce a correct translation (but not transliteration) of its variant in the held-out data. This problem is further discussed in section 4.2.2. Instead of randomly splitting the data, we first create clusters of word pairs which have a common prefix of length 2 both on the source and target language side. We randomly add each cluster either to the training data or to the held-out data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8:",
"sec_num": null
},
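The cluster-based split (steps 1 and 2 of Algorithm 2) can be sketched as follows. The fixed `seed` parameter is our addition for reproducibility; the paper only specifies a random assignment of whole clusters.

```python
import random
from collections import defaultdict

def cluster_split(word_pairs, seed=0):
    """Cluster word pairs sharing a 2-character prefix on both sides,
    then assign whole clusters randomly to training or held-out data,
    so inflectional variants never straddle the split."""
    clusters = defaultdict(list)
    for src, tgt in word_pairs:
        clusters[(src[:2], tgt[:2])].append((src, tgt))
    rng = random.Random(seed)
    train, held_out = [], []
    for members in clusters.values():
        (train if rng.random() < 0.5 else held_out).extend(members)
    return train, held_out
```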
{
"text": "We repeat the mining process (described in Algorithm 1) to eliminate non-transliteration pairs from the training data. For each iteration of Algorithm 2, i.e., steps 4 to 9, we build a transliteration system on the filtered training data and test it on the source side of the held-out. We collect statistics on how well the output of the system matches the target side of the held-out. The matching scores on the held-out data often make large jumps from iteration to iteration. We take the median of the results from 9 consecutive iterations (the 4 iterations before, the current and the 4 iterations after the current iteration) to smooth the scores. We call this median9. We choose the iteration with the best smoothed score as the stopping point for the filtering process. In our tests, the median9 heuristic indicated an iteration close to the optimal iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8:",
"sec_num": null
},
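The median9 smoothing and the tie-breaking rule from the next paragraph can be sketched as below. The handling of the first and last four iterations (truncated windows) is our assumption; the paper does not specify edge behaviour.

```python
import statistics

def median9(scores):
    """Smooth per-iteration held-out scores with the median over the
    4 previous, the current, and the 4 following iterations."""
    smoothed = []
    for i in range(len(scores)):
        window = scores[max(0, i - 4): i + 5]   # truncated at the edges
        smoothed.append(statistics.median(window))
    return smoothed

def stopping_iteration(scores):
    """Pick the iteration with the best median9 score, breaking ties
    in favour of the highest unsmoothed score."""
    sm = median9(scores)
    best = max(sm)
    candidates = [i for i, s in enumerate(sm) if s == best]
    return max(candidates, key=lambda i: scores[i])
```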
{
"text": "Sometimes several nearby iterations have the same maximal smoothed score. In that case, we choose the one with the highest unsmoothed score. Section 4.2 explains the median9 heuristic in more detail and presents experimental results showing that it works well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "8:",
"sec_num": null
},
{
"text": "We evaluate our transliteration mining algorithm on three tasks: transliteration mining from Wikipedia InterLanguage Links, transliteration mining from parallel corpora, and word alignment using a word aligner with a transliteration component. On the WIL data sets, we compare our fully unsupervised system with the semi-supervised systems presented at NEWS10 (Kumaran et al., 2010). In the evaluation on parallel corpora, we compare our mining results with a manually built gold standard in which each word pair is either marked as a transliteration or as a non-transliteration. In the word alignment experiment, we integrate a transliteration module which is trained on the transliteration pairs extracted by our method into a word aligner and show a significant improvement. The following sections describe the experiments in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Table 1: Summary of results on NEWS10 data sets where \"EA\" is English/Arabic, \"ET\" is English/Tamil and \"EH\" is English/Hindi. \"Our\" shows the F-measure of our filtered data against the gold standard using the supplied evaluation tool, \"Systems\" is the total number of participants in the subtask, and \"Rank\" is the rank we would have obtained if our system had participated.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments Using Parallel Phrases of Wikipedia InterLanguage Links",
"sec_num": "4.1"
},
{
"text": "The NEWS10 data sets contain training data, seed data and reference data. We make no use of the seed data since our system is fully unsupervised. We calculate the F-measure of our filtered transliteration pairs against the supplied gold standard using the supplied evaluation tool. For English/Arabic, English/Hindi and English/Tamil, our system is better than most of the semi-supervised systems presented at the NEWS 2010 shared task for transliteration mining. Table 1 summarizes the F-scores on these data sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 464,
"end": 471,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments Using Parallel Phrases of Wikipedia InterLanguage Links",
"sec_num": "4.1"
},
{
"text": "On the English/Russian data set, our system achieves 76% F-measure which is not good compared with the systems that participated in the shared task. The English/Russian corpus contains many cognates which -according to the NEWS10 definition -are not transliterations of each other. Our system learns the cognates in the training data and extracts them as transliterations (see Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 377,
"end": 384,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments Using Parallel Phrases of Wikipedia InterLanguage Links",
"sec_num": "4.1"
},
{
"text": "The two best teams on the English/Russian task presented various extraction methods (Jiampojamarn et al., 2010; Darwish, 2010) . Their systems behave differently on English/Russian than on other language pairs. Their best systems for English/Russian are only trained on the seed data and the use of unlabelled data does not help the performance. Since our system is fully unsupervised, and the unlabelled data is not useful, we perform badly.",
"cite_spans": [
{
"start": 84,
"end": 111,
"text": "(Jiampojamarn et al., 2010;",
"ref_id": "BIBREF6"
},
{
"start": 112,
"end": 126,
"text": "Darwish, 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments Using Parallel Phrases of Wikipedia InterLanguage Links",
"sec_num": "4.1"
},
{
"text": "The Wikipedia InterLanguage Links shared task data contains a much larger proportion of transliterations than a parallel corpus. In order to examine how well our method performs on parallel corpora, we apply it to parallel corpora of English/Hindi and English/Arabic, and compare the transliteration mining results with a gold standard. We use the English/Hindi corpus from the shared task on word alignment, organized as part of the ACL 2005 Workshop on Building and Using Parallel Texts (WA05) (Martin et al., 2005) . For English/Arabic, we use a freely available parallel corpus from the United Nations (UN) (Eisele and Chen, 2010) . We randomly take 200,000 parallel sentences from the UN corpus of the year 2000. We create gold standards for both language pairs by randomly selecting a few thousand word pairs from the lists of word pairs extracted from the two corpora. We manually tag them as either transliterations or non-transliterations. The English/Hindi gold standard contains 180 transliteration pairs and 2084 non-transliteration pairs and the English/Arabic gold standard contains 288 transliteration pairs and 6639 non-transliteration pairs. We have submitted these gold standards with the paper. They are available to the research community.",
"cite_spans": [
{
"start": 496,
"end": 517,
"text": "(Martin et al., 2005)",
"ref_id": "BIBREF10"
},
{
"start": 611,
"end": 634,
"text": "(Eisele and Chen, 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments Using Parallel Corpora",
"sec_num": "4.2"
},
{
"text": "In the following sections, we describe the median9 heuristic and the splitting method of Algorithm 2. The splitting method is used to avoid early peaks in the held-out statistics, and the median9 heuristic smooths the held-out statistics in order to obtain a single peak. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments Using Parallel Corpora",
"sec_num": "4.2"
},
{
"text": "Algorithm 2 collects statistics from the held-out data (step 10) and selects the stopping iteration. Due to the noise in the held-out data, the transliteration accuracy on the held-out data often jumps from iteration to iteration. The dotted line in figure 1 (right) shows the held-out prediction accuracy for the English/Hindi parallel corpus. The curve is very noisy and has two peaks. It is difficult to see the effect of the filtering. We take the median of the results from 9 consecutive iterations to smooth the scores. The solid line in figure 1 (right) shows a smoothed curve built using the median9 held-out scores. A comparison with the gold standard (section 4.2.3) shows that the stopping point (peak) reached using the median9 heuristic is better than the stopping point obtained with unsmoothed scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 266,
"text": "figure 1 (right)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation for Median9 Heuristic",
"sec_num": "4.2.1"
},
{
"text": "Algorithm 2 initially splits the list of word pairs into training and held-out data. A random split worked well for the WIL data, but failed on the parallel corpora. The reason is that parallel corpora contain inflectional variants of the same word. If these variants are randomly distributed over training and held-out data, then a non-transliteration word pair such as the English-Hindi pair \"change -badlao\" may end up in the training data and the related pair \"changes -badlao\" in the held-out data. The Moses system used for transliteration will learn to \"transliterate\" (or actually translate) \"change\" to \"badlao\". From other examples, it will learn that a final \"s\" can be dropped. As a consequence, the Moses transliterator may produce the non-transliteration \"badlao\" for the English word \"changes\" in the held-out data. Such matching predictions of the transliterator which are actually translations lead to an overestimate of the transliteration accuracy and may cause Algorithm 2 to predict a stopping iteration which is too early.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation for Splitting Method",
"sec_num": "4.2.2"
},
{
"text": "By splitting the list of word pairs in such a way that inflectional variants of a word are placed either in the training data, or in the held-out, but not in both, this problem can be solved. 4 The left graph in Figure 1 shows that the median9 held-out statistics obtained after a random data split of a Hindi/English corpus contains two peaks which occur too early. These peaks disappear in the right graph of Figure 1 which shows the results obtained after a split with the clustering method.",
"cite_spans": [
{
"start": 192,
"end": 193,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 212,
"end": 220,
"text": "Figure 1",
"ref_id": null
},
{
"start": 411,
"end": 419,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation for Splitting Method",
"sec_num": "4.2.2"
},
{
"text": "The overall trend of the smoothed curve in figure 1 (right) is very clear. We start by filtering out non-transliteration pairs from the data, so the results go up. When no more non-transliteration pairs are left, we start filtering out transliteration pairs and the results of the system go down. We use this stopping criterion for all language pairs and achieve consistently good results. [Figure 1: Statistics of held-out prediction of English/Hindi data using modified Algorithm 2 with random division of the list of word pairs (left) and using Algorithm 2 (right). The dotted line shows unsmoothed held-out scores and the solid line shows median9 held-out scores of the transliteration system.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation for Splitting Method",
"sec_num": "4.2.2"
},
{
"text": "According to the gold standard, the English/Hindi and English/Arabic data sets contain 8% and 4% transliteration pairs respectively. We repeat the same mining procedure -run Algorithm 2 up to 100 iterations and return the stopping iteration. Then, we run Algorithm 1 up to the stopping iteration returned by Algorithm 2 and obtain the filtered data. Table 3 shows the mining results on the English/Hindi and English/Arabic corpora. The gold standard is a subset of the data sets. The English/Hindi gold standard contains 180 transliteration pairs and 2084 non-transliteration pairs. The English/Arabic gold standard contains 288 transliteration pairs and 6639 non-transliteration pairs. From the English/Hindi data, the mining system has mined 170 transliteration pairs out of 180 transliteration pairs. The English/Arabic mined data contains 197 transliteration pairs out of 288 transliteration pairs. The mining system has wrongly identified a few non-transliteration pairs as transliterations (see table 3, last column). Most of these word pairs are close transliterations and differ by only one or two characters from perfect transliteration pairs. The close transliteration pairs provide many valid multigrams which may be helpful for the mining system.",
"cite_spans": [],
"ref_spans": [
{
"start": 350,
"end": 357,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results on Parallel Corpora",
"sec_num": "4.2.3"
},
{
"text": "In the previous section, we presented a method for the extraction of transliteration pairs from a parallel corpus. In this section, we will explain how to build a transliteration module on the extracted transliteration pairs and how to integrate it into MGIZA++ (Gao and Vogel, 2008) by interpolating it with the ttable probabilities of the IBM models and the HMM model. MGIZA++ is an extension of GIZA++. It has the ability to resume training from any model rather than starting with Model1.",
"cite_spans": [
{
"start": 262,
"end": 283,
"text": "(Gao and Vogel, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integration into Word Alignment Model",
"sec_num": "4.3"
},
{
"text": "GIZA++ applies the IBM models (Brown et al., 1993 ) and the HMM model (Vogel et al., 1996) in both directions, i.e., source to target and target to source. The alignments are refined using the grow-diag-final-and heuristic (Koehn et al., 2003) . GIZA++ generates a list of translation pairs with alignment probabilities, which is called the t-table.",
"cite_spans": [
{
"start": 30,
"end": 49,
"text": "(Brown et al., 1993",
"ref_id": "BIBREF1"
},
{
"start": 70,
"end": 90,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF18"
},
{
"start": 223,
"end": 243,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
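The grow-diag-final-and refinement mentioned above can be sketched as follows; this is a simplified, set-based rendition (the function name and iteration order are ours), not the exact Moses/GIZA++ implementation.

```python
def grow_diag_final_and(e2f, f2e):
    """Simplified sketch of the grow-diag-final-and symmetrization
    heuristic (Koehn et al., 2003). e2f and f2e are sets of
    (src, tgt) index pairs from the two directional alignments."""
    neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1),
                 (-1, -1), (-1, 1), (1, -1), (1, 1)]
    union = e2f | f2e
    alignment = e2f & f2e  # start from the high-precision intersection

    def src_aligned(i):
        return any(a[0] == i for a in alignment)

    def tgt_aligned(j):
        return any(a[1] == j for a in alignment)

    # grow-diag: add neighboring union points that align a word
    # which is still unaligned on at least one side
    added = True
    while added:
        added = False
        for (i, j) in sorted(alignment):
            for (di, dj) in neighbors:
                p = (i + di, j + dj)
                if (p in union and p not in alignment
                        and (not src_aligned(p[0]) or not tgt_aligned(p[1]))):
                    alignment.add(p)
                    added = True

    # final-and: add remaining union points whose words are both unaligned
    for (i, j) in sorted(union):
        if not src_aligned(i) and not tgt_aligned(j):
            alignment.add((i, j))
    return alignment
```

Starting from the intersection gives high precision; growing toward the union and the final-and pass recover recall without adding links for already-aligned word pairs.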
{
"text": "In this section, we propose a method to modify the translation probabilities of the t-table by interpolating the translation counts with transliteration counts. The interpolation is done in both directions. In the following, we will only consider the e-to-f direction. The transliteration module which is used to calculate the conditional transliteration probability is described in Algorithm 3. We build a transliteration system by training Moses on the filtered transliteration corpus (using Algorithm 1) and apply it to the e side of the list of word pairs. For every source word, we generate the list of 10-best transliterations nbestT I(e). Then, we extract every f that cooccurs with e in a parallel sentence and add it to nbestT I(e) which gives us the list of candidate transliteration pairs candidateT I(e). We use the sum of transliteration probabilities f \u2208CandidateT I(e) p moses (f , e) as an approximation for the prior probability p moses (e) = f p moses (f , e) which is needed to convert the joint transliteration probability into a conditional Algorithm 3 Estimation of transliteration probabilities, e-to-f direction 1: unfiltered data \u2190list of word pairs 2: filtered data \u2190transliteration pairs extracted using Algorithm 1 3: Train a transliteration system on the filtered data 4: for all e do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "nbestT I(e) \u2190 10 best transliterations for e according to the transliteration system 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "cooc(e) \u2190 set of all f that cooccur with e in a parallel sentence 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "candidateT I(e) \u2190 cooc(e) \u222a nbestT I(e) 8: end for 9: for all f do 10:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "p moses (f, e) \u2190 joint transliteration probability of e and f according to the transliterator 11:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "p ti (f |e) \u2190 pmoses(f,e) P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "f \u2208CandidateT I(e) pmoses(f ,e) 12: end for probability. We use the constraint decoding option of Moses to compute the joint probability of e and f. It computes the probability by dividing the translation score of the best target sentence given a source sentence by the normalization factor. We combine the transliteration probabilities with the translation probabilities of the IBM models and the HMM model. The normal translation probability p ta (f |e) of the word alignment models is computed with relative frequency estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
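The normalization step of Algorithm 3 (lines 9-12) can be sketched as follows; the function name and the dictionary-based data layout are our own illustrative choices, not part of the paper's implementation.

```python
def conditional_transliteration_probs(joint, cooc, nbest):
    """Sketch of Algorithm 3's normalization step: convert joint
    transliteration scores p_moses(f, e) into conditionals p_ti(f|e),
    normalizing over the candidate set cooc(e) union nbestTI(e).

    joint: dict mapping (f, e) -> joint probability from the transliterator
    cooc:  dict mapping e -> set of target words cooccurring with e
    nbest: dict mapping e -> list of n-best transliterations of e
    """
    p_ti = {}
    for e, cands in cooc.items():
        candidates = cands | set(nbest.get(e, []))
        # approximate the prior p_moses(e) by summing over the candidates
        z = sum(joint.get((f, e), 0.0) for f in candidates)
        if z <= 0.0:
            continue
        for f in candidates:
            p_ti[(f, e)] = joint.get((f, e), 0.0) / z
    return p_ti
```

Restricting the normalization to the candidate set is what makes the prior p_moses(e) tractable: summing the joint probability over all possible target strings would be infeasible.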
{
"text": "We smooth the alignment frequencies by adding the transliteration probabilities weighted by the factor \u03bb and get the following modified translation probabilitie\u015d p(f |e) = f ta (f, e) + \u03bbp ti (f |e) f ta (e) + \u03bb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
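The smoothing formula can be written as a one-step function. This is a minimal sketch under our own argument names, with λ defaulting to the paper's tuned value of 80; since the counts f_ta(f, e) sum to f(e) over f, the denominator f_ta(e) is just the corpus frequency f(e).

```python
def smoothed_prob(p_ta, freq_e, p_ti, lam=80.0):
    """Witten-Bell-style interpolation from the paper:
    p-hat(f|e) = (f_ta(f, e) + lam * p_ti(f|e)) / (f_ta(e) + lam),
    with f_ta(f, e) = p_ta(f|e) * f(e) and f_ta(e) = f(e)."""
    f_ta = p_ta * freq_e  # expected alignment count of (f, e)
    return (f_ta + lam * p_ti) / (freq_e + lam)
```

For frequent words (large f(e)) the translation counts dominate; for rare words the transliteration probability takes over, which is exactly the Witten-Bell behavior the paper appeals to.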
{
"text": "where f ta (f, e) = p ta (f |e)f (e). p ta (f |e) is obtained from the original t-table of the alignment model. f (e) is the total corpus frequency of e. \u03bb is the transliteration weight which is optimized for every language pair (see section 4.3.2). Apart from the definition of the weight \u03bb, our smoothing method is equivalent to Witten-Bell smoothing. We smooth after every iteration of the IBM models and the HMM model except the last iteration of each model. Algorithm 4 shows the smoothing for IBM Model4. IBM Model1 and the HMM model are smoothed in the same way. We also apply Algorithm 3 and Algorithm 4 in the alignment direction Algorithm 4 Interpolation with the IBM Model4, eto-f direction 1: {We want to run four iterations of Model4} 2: f (e) \u2190 total frequency of e in the corpus 3: Run MGIZA++ for one iteration of Model4 4: I \u2190 1 5: while I < 4 do f ta (f, e) \u2190 p ta (f |e)f (e) for all (f, e) 8:p(f |e) \u2190 fta(f,e)+\u03bbpti (f |e) fta(e)+\u03bb for all (f, e) 9:",
"cite_spans": [
{
"start": 936,
"end": 942,
"text": "(f |e)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "Resume MGIZA++ training for 1 iteration using the modified t-table probabilitiesp(f |e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "10:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "I \u2190 I + 1 11: end while f to e. The final alignments are generated using the grow-diag-final-and heuristic (Koehn et al., 2003) .",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modified EM Training of the Word Alignment Models",
"sec_num": "4.3.1"
},
{
"text": "The English/Hindi corpus available from WA05 consists of training, development and test data. As development and test data for English/Arabic, we use manually created gold standard word alignments for 155 sentences extracted from the Hansards corpus released by LDC. We use 50 sentences for development and 105 sentences for test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3.2"
},
{
"text": "Baseline: We align the data sets using GIZA++ (Och and Ney, 2003) and refine the alignments using the grow-diag-final-and heuristic (Koehn et al., 2003) . We obtain the baseline F-measure by comparing the alignments of the test corpus with the gold standard alignments. Experiments We use GIZA++ with 5 iterations of Model1, 4 iterations of HMM and 4 iterations of Model4. We interpolate translation and transliteration probabilities at different iterations (and different combinations of iterations) of the three models and always observe an improvement in alignment quality. For the final experiments, we interpolate at every iteration of the IBM models and the HMM model except the last iteration of every model where we could not interpolate for technical reasons. 5 Algo- 5 We had problems in resuming MGIZA++ training when training was supposed to continue from a different model, such as if we stopped after the 5th iteration of Model1 and then tried to resume MGIZA++ from the first iteration of the HMM model. In this case, we ran the 5th iteration of Model1, then the first iteration of the HMM and only then stopped for interpola-rithm 4 shows the interpolation of the transliteration probabilities with IBM Model4. We used the same procedure with IBM Model1 and the HMM model.",
"cite_spans": [
{
"start": 46,
"end": 65,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF13"
},
{
"start": 132,
"end": 152,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF8"
},
{
"start": 777,
"end": 778,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3.2"
},
{
"text": "The parameter \u03bb is optimized on development data for every language pair. The word alignment system is not very sensitive to \u03bb. Any \u03bb in the range between 50 and 100 works fine for all language pairs. The optimization helps to maximize the improvement in word alignment quality. For our experiments, we use \u03bb = 80.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3.2"
},
{
"text": "On test data, we achieve an improvement of approximately 10% and 13.5% in precision and 3.5% and 13.5% in recall on English/Hindi and English/Arabic word alignment, respectively. Table 4 shows the scores of the baseline and our word alignment model. We compared our word alignment results with the systems presented at WA05. Three systems, one limited and two un-limited, participated in the English/Hindi task. We outperform the limited system and one un-limited system.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 186,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3.2"
},
{
"text": "Lang P b R b F b P ti R ti F ti EH",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3.2"
},
{
"text": "Previous work on transliteration mining uses a manually labelled set of training data to extract transliteration pairs from a parallel corpus or comparable corpora. The training data may contain a few hundred randomly selected transliteration pairs from a transliteration dictionary (Yoon et al., 2007; Lee and Chang, 2003) or just a few carefully selected transliteration pairs (Sherif and Kondrak, 2007; Klementiev and Roth, 2006) . Our work is more challenging as we extract transliteration pairs without using transliteration dictionaries or gold standard transliteration pairs. Klementiev and Roth (2006) initialize their transliteration model with a list of 20 transliteration tion; so we did not interpolate in just those iterations of training where we were transitioning from one model to the next. pairs. Their model makes use of temporal scoring to rank the candidate transliterations. A lot of work has been done on discovering and learning transliterations from comparable corpora by using temporal and phonetic information (Tao et al., 2006; Klementiev and Roth, 2006; . We do not have access to this information. Sherif and Kondrak (2007) train a probabilistic transducer on 14 manually constructed transliteration pairs of English/Arabic. They iteratively extract transliteration pairs from the test data and add them to the training data. Our method is different from the method of Sherif and Kondrak (2007) as our method is fully unsupervised, and because in each iteration, they add the most probable transliteration pairs to the training data, while we filter out the least probable transliteration pairs from the training data.",
"cite_spans": [
{
"start": 283,
"end": 302,
"text": "(Yoon et al., 2007;",
"ref_id": "BIBREF19"
},
{
"start": 303,
"end": 323,
"text": "Lee and Chang, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 379,
"end": 405,
"text": "(Sherif and Kondrak, 2007;",
"ref_id": "BIBREF14"
},
{
"start": 406,
"end": 432,
"text": "Klementiev and Roth, 2006)",
"ref_id": "BIBREF7"
},
{
"start": 583,
"end": 609,
"text": "Klementiev and Roth (2006)",
"ref_id": "BIBREF7"
},
{
"start": 1037,
"end": 1055,
"text": "(Tao et al., 2006;",
"ref_id": "BIBREF17"
},
{
"start": 1056,
"end": 1082,
"text": "Klementiev and Roth, 2006;",
"ref_id": "BIBREF7"
},
{
"start": 1128,
"end": 1153,
"text": "Sherif and Kondrak (2007)",
"ref_id": "BIBREF14"
},
{
"start": 1399,
"end": 1424,
"text": "Sherif and Kondrak (2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Research",
"sec_num": "5"
},
{
"text": "The transliteration mining systems of the four NEWS10 participants are either based on discriminative or on generative methods. All systems use manually labelled (seed) data for the initial training. The system based on the edit distance method submitted by Jiampojamarn et al. (2010) performs best for the English/Russian task. Jiampojamarn et al. (2010) submitted another system based on a standard n-gram kernel which ranked first for the English/Hindi and English/Tamil tasks. 6 For the English/Arabic task, the transliteration mining system of Noeman and Madkour (2010) was best. They normalize the English and Arabic characters in the training data which increases the recall. 7 Our transliteration extraction method differs in that we extract transliteration pairs from a parallel corpus without supervision. The results of the NEWS10 experiments (Kumaran et al., 2010) show that no single system performs well on all language pairs. Our unsupervised method seems robust as its performance is similar to or better than many of the semi-supervised systems on three language pairs.",
"cite_spans": [
{
"start": 258,
"end": 284,
"text": "Jiampojamarn et al. (2010)",
"ref_id": "BIBREF6"
},
{
"start": 683,
"end": 684,
"text": "7",
"ref_id": null
},
{
"start": 854,
"end": 876,
"text": "(Kumaran et al., 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Research",
"sec_num": "5"
},
{
"text": "We are only aware of one previous work which uses transliteration information for word alignment. 6 They use the seed data as positive examples. In order to obtain also negative examples, they generate all possible word pairs from the source and target words in the seed data and extract the ones which are not transliterations but have a common substring of some minimal length.",
"cite_spans": [
{
"start": 98,
"end": 99,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Research",
"sec_num": "5"
},
{
"text": "7 They use the phrase table of Moses to build a mapping table between source and target characters. The mapping table is then used to construct a finite state transducer. Hermjakob (2009) proposed a linguistically focused word alignment system which uses many features including hand-crafted transliteration rules for Arabic/English alignment. His evaluation did not explicitly examine the effect of transliteration (alone) on word alignment. We show that the integration of a transliteration system based on unsupervised transliteration mining increases the word alignment quality for the two language pairs we tested.",
"cite_spans": [
{
"start": 171,
"end": 187,
"text": "Hermjakob (2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Research",
"sec_num": "5"
},
{
"text": "We proposed a method to automatically extract transliteration pairs from parallel corpora without supervision or linguistic knowledge. We evaluated it against the semi-supervised systems of NEWS10 and achieved high F-measure and performed better than most of the semi-supervised systems. We also evaluated our method on parallel corpora and achieved high F-measure. We integrated the transliteration extraction module into the GIZA++ word aligner and showed gains in alignment quality. We will release our transliteration mining system and word alignment system in the near future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Since we delete 5% from the filtered data, the number of deleted data items decreases in each iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We do not evaluate on the English/Chinese data because the Chinese data requires word segmentation which is beyond the scope of our work. Another problem is that our extraction method was developed for alphabetic languages and probably needs to be adapted before it is applicable to logographic languages such as Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We do not use the seed data in our system. However, to check the correctness of the stopping point, we tested the transliteration system on the seed data (available with NEWS10) for every iteration of Algorithm 2. We verified that the median9 held-out statistics and accuracy on the seed data have their peaks at the same iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This solution is appropriate for all of the language pairs used in our experiments, but should be revisited if there is inflection realized as prefixes, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors wish to thank the anonymous reviewers for their comments. We would like to thank Christina Lioma for her valuable feedback on an earlier draft of this paper. Hassan Sajjad was funded by the Higher Education Commission (HEC) of Pakistan. Alexander Fraser was funded by Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation. Helmut Schmid was supported by Deutsche Forschungsgemeinschaft grant SFB 732.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Jointsequence models for grapheme-to-phoneme conversion",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Bisani",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "Speech Communication",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Bisani and Hermann Ney. 2008. Joint- sequence models for grapheme-to-phoneme conver- sion. Speech Communication, 50(5).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The mathematics of statistical machine translation: parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Transliteration mining with phonetic conflation and iterative training",
"authors": [
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kareem Darwish. 2010. Transliteration mining with phonetic conflation and iterative training. In Proceed- ings of the 2010 Named Entities Workshop, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "MultiUN: A multilingual corpus from United Nation documents",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Eisele",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Eisele and Yu Chen. 2010. MultiUN: A multi- lingual corpus from United Nation documents. In Pro- ceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10), Val- letta, Malta.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Parallel implementations of word alignment tool",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Gao and Stephan Vogel. 2008. Parallel implemen- tations of word alignment tool. In Software Engineer- ing, Testing, and Quality Assurance for Natural Lan- guage Processing, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improved word alignment with statistics and linguistic heuristics",
"authors": [
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulf Hermjakob. 2009. Improved word alignment with statistics and linguistic heuristics. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 -Volume 1, EMNLP '09, Morristown, NJ, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Transliteration generation and mining with limited training resources",
"authors": [
{
"first": "Sittichai",
"middle": [],
"last": "Jiampojamarn",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Dwyer",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Bhargava",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Mi-Young",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sittichai Jiampojamarn, Kenneth Dwyer, Shane Bergsma, Aditya Bhargava, Qing Dou, Mi-Young Kim, and Grzegorz Kondrak. 2010. Transliteration generation and mining with limited training resources. In Pro- ceedings of the 2010 Named Entities Workshop, Upp- sala, Sweden. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Weakly supervised named entity transliteration and discovery from multilingual comparable corpora",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Klementiev and Dan Roth. 2006. Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. In Proceed- ings of the 21st International Conference on Compu- tational Linguistics and the 44th annual meeting of the ACL, Morristown, NJ, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Whitepaper of news 2010 shared task on transliteration mining",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu ; A Kumaran",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology and North Ameri- can Association for Computational Linguistics Con- ference, pages 127-133, Edmonton, Canada. A Kumaran, Mitesh M. Khapra, and Haizhou Li. 2010. Whitepaper of news 2010 shared task on translitera- tion mining. In Proceedings of the 2010 Named En- tities Workshop the 48th Annual Meeting of the ACL, Uppsala, Sweden.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Acquisition of English-Chinese transliterated word pairs from parallel-aligned texts using a statistical machine transliteration model",
"authors": [
{
"first": "Chun-Jen",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the HLT-NAACL 2003 Workshop on Building and using parallel texts",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chun-Jen Lee and Jason S. Chang. 2003. Acqui- sition of English-Chinese transliterated word pairs from parallel-aligned texts using a statistical machine transliteration model. In Proceedings of the HLT- NAACL 2003 Workshop on Building and using parallel texts, Morristown, NJ, USA. ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word alignment for languages with scarce resources",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2005,
"venue": "ParaText '05: Proceedings of the ACL Workshop on Building and Using Parallel Texts",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Martin, Rada Mihalcea, and Ted Pedersen. 2005. Word alignment for languages with scarce resources. In ParaText '05: Proceedings of the ACL Workshop on Building and Using Parallel Texts, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Language independent transliteration mining system using finite state automata framework",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Noeman",
"suffix": ""
},
{
"first": "Amgad",
"middle": [],
"last": "Madkour",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Noeman and Amgad Madkour. 2010. Language independent transliteration mining system using finite state automata framework. In Proceedings of the 2010",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Named Entities Workshop, Uppsala, Sweden. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bootstrapping a stochastic transducer for Arabic-English transliteration extraction",
"authors": [
{
"first": "Tarek",
"middle": [],
"last": "Sherif",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tarek Sherif and Grzegorz Kondrak. 2007. Boot- strapping a stochastic transducer for Arabic-English transliteration extraction. In ACL, Prague, Czech Re- public.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Named entity transliteration with comparable corpora",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Sproat, Tao Tao, and ChengXiang Zhai. 2006. Named entity transliteration with comparable corpora. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Intl. Conf. Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -an extensible language modeling toolkit. In Intl. Conf. Spoken Language Pro- cessing, Denver, Colorado.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unsupervised named entity transliteration using temporal and phonetic correlation",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Su-Yoon Yoon",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Fister",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Tao, Su-Yoon Yoon, Andrew Fister, Richard Sproat, and ChengXiang Zhai. 2006. Unsupervised named entity transliteration using temporal and phonetic correlation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Sydney.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "16th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical trans- lation. In 16th International Conference on Computa- tional Linguistics, pages 836-841, Copenhagen, Den- mark.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multilingual transliteration using feature based phonetic method",
"authors": [
{
"first": "Kyoung-Young",
"middle": [],
"last": "Su-Youn Yoon",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su-Youn Yoon, Kyoung-Young Kim, and Richard Sproat. 2007. Multilingual transliteration using feature based phonetic method. In Proceedings of the 45th Annual Meeting of the ACL, Prague, Czech Republic.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "p ta (f |e) in the t-table of Model4 7:"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "Cognates from English/Russian corpus extracted by our system as transliteration pairs. None of them are correct transliteration pairs according to the gold standard.",
"content": "<table/>",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Transliteration mining results using the parallel corpus of English/Hindi (EH) and English/Arabic (EA) against the gold standard",
"content": "<table/>",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "Word alignment results on the test data of English/Hindi (EH) and English/Arabic (EA) where P b is the precision of baseline GIZA++ and P ti is the precision of our word alignment system",
"content": "<table/>",
"html": null
}
}
}
}