{
"paper_id": "N12-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:31.088562Z"
},
"title": "Transliteration Mining Using Large Training and Test Sets",
"authors": [
{
"first": "Ali",
"middle": [
"El"
],
"last": "Kahki",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": "",
"affiliation": {},
"email": "kdarwish@qf.org.qa"
},
{
"first": "Ahmed",
"middle": [],
"last": "Saad",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "El",
"middle": [],
"last": "Din",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mohamed",
"middle": [
"Abd"
],
"last": "El-Wahab",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Much previous work on Transliteration Mining (TM) was conducted on short parallel snippets using limited training data, and successful methods tended to favor recall. For such methods, increasing training data may impact precision and application on large comparable texts may impact precision and recall. We adapt a state-of-the-art TM technique with the best reported scores on the ACL 2010 NEWS workshop dataset, namely graph reinforcement, to work with large training sets. The method models observed character mappings between language pairs as a bipartite graph and unseen mappings are induced using random walks. Increasing training data yields more correct initial mappings but induced mappings become more error prone. We introduce parameterized exponential penalty to the formulation of graph reinforcement and we estimate the proper parameters for training sets of varying sizes. The new formulation led to sizable improvements in precision. Mining from large comparable texts leads to the presence of phonetically similar words in target and source texts that may not be transliterations or may adversely impact candidate ranking. To overcome this, we extracted related segments that have high translation overlap, and then we performed TM on them. Segment extraction produced significantly higher precision for three different TM methods.",
"pdf_parse": {
"paper_id": "N12-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "Much previous work on Transliteration Mining (TM) was conducted on short parallel snippets using limited training data, and successful methods tended to favor recall. For such methods, increasing training data may impact precision and application on large comparable texts may impact precision and recall. We adapt a state-of-the-art TM technique with the best reported scores on the ACL 2010 NEWS workshop dataset, namely graph reinforcement, to work with large training sets. The method models observed character mappings between language pairs as a bipartite graph and unseen mappings are induced using random walks. Increasing training data yields more correct initial mappings but induced mappings become more error prone. We introduce parameterized exponential penalty to the formulation of graph reinforcement and we estimate the proper parameters for training sets of varying sizes. The new formulation led to sizable improvements in precision. Mining from large comparable texts leads to the presence of phonetically similar words in target and source texts that may not be transliterations or may adversely impact candidate ranking. To overcome this, we extracted related segments that have high translation overlap, and then we performed TM on them. Segment extraction produced significantly higher precision for three different TM methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Transliteration Mining (TM) is the process of finding transliterations in parallel or comparable texts of different languages. For example, given the Arabic-English word sequence pairs: ( \u202b\ufeeb\ufbac\ufbaa\u06be\ufe8e\ufedf\ufef2\u202c \u202b\ufe8d\u0627\ufedf\ufee4\ufee0\ufeda\u202c \u202b,\ufeb3\ufefc\ufeb3\ufef2\u202c Haile Selassie I of Ethiopia), successful TM would mine the transliterations: \u202b,\ufeeb\ufbac\ufbaa\u06be\ufe8e\ufedf\ufef2(\u202c Haile) and \u202b,\ufeb3\ufefc\ufeb3\ufef2(\u202c Selassie). TM has been shown to be effective in several Information Retrieval (IR) and Natural Language Processing (NLP) applications. For example, in cross language IR, TM was used to handle out-of-vocabulary query words by mining transliterations between words in queries and top n retrieved documents and then using transliterations to expand queries (Udupa et al., 2009a) . In Machine Translation (MT), TM can improve alignment at training time and help enrich phrase tables with named entities that may not appear in parallel training data. More broadly, TM is a character mapping problem. Having good character mapping models can be beneficial in a variety of applications such as learning stemming models, learning spelling transformations between similar languages, and finding variant spellings of names (Udupa and Kumar, 2010b) .",
"cite_spans": [
{
"start": 679,
"end": 700,
"text": "(Udupa et al., 2009a)",
"ref_id": "BIBREF29"
},
{
"start": 1138,
"end": 1162,
"text": "(Udupa and Kumar, 2010b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "TM has attracted interest in recent years with a dedicated evaluation in the ACL 2010 NEWS workshop. In that evaluation, TM was performed using limited training data, namely 1,000 parallel transliteration word-pairs, on short parallel text segments, namely cross-language Wikipedia titles which were typically a few words long. Since TM was performed on very short parallel segments, the chances that two phonetically similar words would appear within such a short text segment in one language were typically very low. Also, since TM training datasets were small, many valid mappings were not observed in training. For these two reasons, most of the successful techniques related to that evaluation have focused on improving recall, while hurting precision slightly. Some of these techniques involved the use of letter conflation based on a SOUNDEX like scheme (Darwish, 2010; Oh and Choi, 2006) and character n-gram similarity. The most successful technique on ACL-NEWS dataset, involved the use of graph reinforcement in which observed mappings between language pairs were modeled using a bipartite graph and unseen mappings were induced using random walks (El- Kahki et al., 2011) .",
"cite_spans": [
{
"start": 861,
"end": 876,
"text": "(Darwish, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 877,
"end": 895,
"text": "Oh and Choi, 2006)",
"ref_id": "BIBREF23"
},
{
"start": 1164,
"end": 1183,
"text": "Kahki et al., 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, we focus on improving TM between Arabic and English in more realistic settings, compared to the NEWS workshop dataset. Specifically, we focus on the cases where: 1. Relatively large TM training sets, which are typical of production systems, are available. As we will show, using more training data in conjunction with recall-oriented techniques that perform well on small training sets can adversely hurt precision, leading to drops in F-measure. A more fundamental question is what constitutes \"large\" versus \"small\" training sets. Ideally, we want a unified solution for training sets of varying sizes. 2. TM is performed on large comparable texts which are ubiquitously available from different sources such as cross language news and Wikipedia articles. In this case, there are two phenomena that arise. First, there is an increased probability (compared to short texts) that words in the target and source texts may be phonetically similar, while not being transliterations of each other. One such example is the Arabic word \u202b,\"\ufee3\ufee6\"\u202c which means \"in\" and is pronounced as \"min\" and the English word \"men\". Such cases adversely affect precision. Second, given a source language word, there may be multiple target language words that are phonetically similar and TM may rank a wrong word higher than the correct one. For example, consider the Arabic word \u202b,\"\ufe9f\ufeee\"\u202c which is pronounced as \"joe\" but is in fact the rendition of the Chinese name \"Zhou\". If the English text has words such as \"jaw\", \"joe\", \"jo\", \"joy\", etc., one of them may rank higher than \"Zhou\". Since only the top choice is considered, this phenomenon would hurt precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
        "text": "We address these two situations by making the following two contributions: 1. Modifying the TM technique with the best reported results on the ACL 2010 NEWS workshop, namely graph reinforcement (El- Kahki et al., 2011) to handle training sets of arbitrary sizes by introducing parameterized exponential penalty to the mapping induction process. We show that we can effectively learn the parameters that tune the penalty for two different training sets of varying sizes. In doing so, we achieve better results for graph reinforcement with larger training sets. 2. For large comparable texts, we use contextual clues, namely translations of neighboring words, to constrain TM and to preserve precision. Specifically, we initially extract text segments that are \"related\" based on cross lingual lexical overlap, and then we perform TM on these segments. Though there have been some papers on extracting sub-sentence alignments from comparable text (Hewavitharana and Vogel, 2011; Munteanu and Marcu, 2006) , extracting related (as opposed to parallel) text segments may be preferable because: 1) transliterations may not occur in parallel contexts; 2) using simple lexical overlap is efficient; and as we will show 3) simultaneous use of phonetic and contextual evidences may be sufficient to produce high TM precision. Alternate solutions focused on performing TM on extracted named entities only (Udupa et al., 2009b) . Some drawbacks of such an approach are: 1) named entity recognition (NER) may not be available for many languages; and 2) NER has inherently low recall for languages such as Arabic where no discriminating features such as capitalization exist. The remainder of the paper is organized as follows: Section 2 provides background on TM; Section 3 describes the basic TM system that is used in the paper; Section 4 describes graph reinforcement, shows how it fares in the presence of a large training set, and introduces modifications to graph reinforcement to improve its effectiveness with such data; Section 5 introduces the use of contextual clues to improve TM and reports on its effectiveness; and Section 6 concludes the paper.",
"cite_spans": [
{
"start": 199,
"end": 218,
"text": "Kahki et al., 2011)",
"ref_id": "BIBREF4"
},
{
"start": 945,
"end": 976,
"text": "(Hewavitharana and Vogel, 2011;",
"ref_id": "BIBREF7"
},
{
"start": 977,
"end": 1002,
"text": "Munteanu and Marcu, 2006)",
"ref_id": "BIBREF20"
},
{
"start": 1395,
"end": 1416,
"text": "(Udupa et al., 2009b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Much work has been done on TM for different language pairs such as English-Chinese (Kuo et al., 2006; Kuo et al., 2007; Kuo et al., 2008; Jin et al. 2008; ) , English-Tamil (Saravanan and Kumaran, 2008; Udupa and Khapra, 2010) , English-Korean (Oh and Isahara, 2006; Oh and Choi, 2006) , English-Japanese (Qu et al., 2000; Brill et al., 2001; Oh and Isahara, 2006) , English-Hindi (Fei et al., 2003; Mahesh and Sinha, 2009) , and English-Russian (Klementiev and Roth, 2006) . TM typically involves finding character mappings between two languages and using these mappings to ascertain if two words are transliterations or not.",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "(Kuo et al., 2006;",
"ref_id": "BIBREF15"
},
{
"start": 102,
"end": 119,
"text": "Kuo et al., 2007;",
"ref_id": "BIBREF16"
},
{
"start": 120,
"end": 137,
"text": "Kuo et al., 2008;",
"ref_id": "BIBREF17"
},
{
"start": 138,
"end": 154,
"text": "Jin et al. 2008;",
"ref_id": "BIBREF9"
},
{
"start": 155,
"end": 156,
"text": ")",
"ref_id": null
},
{
"start": 173,
"end": 202,
"text": "(Saravanan and Kumaran, 2008;",
"ref_id": "BIBREF27"
},
{
"start": 203,
"end": 226,
"text": "Udupa and Khapra, 2010)",
"ref_id": "BIBREF31"
},
{
"start": 244,
"end": 266,
"text": "(Oh and Isahara, 2006;",
"ref_id": "BIBREF24"
},
{
"start": 267,
"end": 285,
"text": "Oh and Choi, 2006)",
"ref_id": "BIBREF23"
},
{
"start": 305,
"end": 322,
"text": "(Qu et al., 2000;",
"ref_id": null
},
{
"start": 323,
"end": 342,
"text": "Brill et al., 2001;",
"ref_id": "BIBREF2"
},
{
"start": 343,
"end": 364,
"text": "Oh and Isahara, 2006)",
"ref_id": "BIBREF24"
},
{
"start": 381,
"end": 399,
"text": "(Fei et al., 2003;",
"ref_id": "BIBREF5"
},
{
"start": 400,
"end": 423,
"text": "Mahesh and Sinha, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 446,
"end": 473,
"text": "(Klementiev and Roth, 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2."
},
{
"text": "To find character sequence mappings between two languages, the most common approach entails using automatic letter alignment of transliteration pairs. Automatic alignment can be performed using different algorithms such as EM (Kuo et al., 2008; Lee and Chang, 2003) or HMM-based alignment (Udupa et al., 2009a; Udupa et al., 2009b) . Another method uses automatic speech recognition confusion tables to extract phonetically equivalent character sequences to discover monolingual and cross-lingual pronunciation variations (Kuo and Yang, 2005) . Alternatively, letters can be mapped into a common character set using a predefined transliteration scheme (Darwish, 2010; Oh and Choi, 2006) .",
"cite_spans": [
{
"start": 226,
"end": 244,
"text": "(Kuo et al., 2008;",
"ref_id": "BIBREF17"
},
{
"start": 245,
"end": 265,
"text": "Lee and Chang, 2003)",
"ref_id": "BIBREF19"
},
{
"start": 289,
"end": 310,
"text": "(Udupa et al., 2009a;",
"ref_id": "BIBREF29"
},
{
"start": 311,
"end": 331,
"text": "Udupa et al., 2009b)",
"ref_id": "BIBREF30"
},
{
"start": 522,
"end": 542,
"text": "(Kuo and Yang, 2005)",
"ref_id": "BIBREF18"
},
{
"start": 652,
"end": 667,
"text": "(Darwish, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 668,
"end": 686,
"text": "Oh and Choi, 2006)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Character Mappings",
"sec_num": "2.1"
},
{
        "text": "For the problem of ascertaining if two words can be transliterations of each other, a common approach involves using a generative model that attempts to generate all possible transliterations of a source word, given the character mappings between two languages, and restricting the output to words in the target language (Fei et al., 2003; Chang, 2003, Udupa et al., 2009a) . This is similar to the baseline approach that we used in this paper. Noeman and Madkour (2010) implemented this technique using a finite state automaton by generating all possible transliterations along with weighted edit distance and then filtered them using appropriate thresholds and target language words. El- Kahki et al. (2011) combined a generative model with so-called graph reinforcement, which is described in greater detail in Section 4. They reported the best TM results on the ACL 2010 NEWS workshop dataset for 4 different languages. Alternatively, back-transliteration can be used to determine if one sequence could have been generated by successively mapping character sequences from one language into another (Brill et al., 2001; Bilac and Tanaka, 2005; Oh and Isahara, 2006) . Udupa and Khapra (2010) proposed a method in which transliteration candidates are mapped into a \"low-dimensional common representation space\". Then, the similarity between the resultant feature vectors for both candidates can be computed. A similar approach uses context-sensitive hashing (Udupa and Kumar, 2010) . Jiampojamarn et al. (2010) used classification to determine if source and target language words were valid transliterations. They used a variety of features including edit distance between an English token and the Romanized versions of the foreign token, forward and backward transliteration probabilities, and character n-gram similarity. Udupa et al. (2009b) used a similar classification-based approach.",
"cite_spans": [
{
"start": 321,
"end": 339,
"text": "(Fei et al., 2003;",
"ref_id": "BIBREF5"
},
{
"start": 340,
"end": 373,
"text": "Chang, 2003, Udupa et al., 2009a)",
"ref_id": null
},
{
"start": 445,
"end": 470,
"text": "Noeman and Madkour (2010)",
"ref_id": "BIBREF21"
},
{
"start": 690,
"end": 709,
"text": "Kahki et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 1100,
"end": 1120,
"text": "(Brill et al., 2001;",
"ref_id": "BIBREF2"
},
{
"start": 1121,
"end": 1144,
"text": "Bilac and Tanaka, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 1145,
"end": 1166,
"text": "Oh and Isahara, 2006)",
"ref_id": "BIBREF24"
},
{
"start": 1169,
"end": 1192,
"text": "Udupa and Khapra (2010)",
"ref_id": "BIBREF31"
},
{
"start": 1458,
"end": 1481,
"text": "(Udupa and Kumar, 2010)",
"ref_id": "BIBREF32"
},
{
"start": 1484,
"end": 1510,
"text": "Jiampojamarn et al. (2010)",
"ref_id": "BIBREF8"
},
{
"start": 1824,
"end": 1844,
"text": "Udupa et al. (2009b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration Mining",
"sec_num": "2.2"
},
{
"text": "We used a generative TM model that was trained on a set of transliteration pairs. We automatically aligned these pairs at character level using an HMM-based aligner akin to that of He (2007) . Alignment produced mappings between characters from both languages with associated probabilities. We restricted individual source language character sequences to be 3 characters at most. We always treated English as the target language and Arabic as the source language.",
"cite_spans": [
{
"start": 181,
"end": 190,
"text": "He (2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Baseline System",
"sec_num": "3.1"
},
{
"text": "Briefly, we produced all possible segmentations of a source word along with their associated mappings into the target language. Valid target sequences were retained and sorted by the product of the constituent mapping probabilities. The candidate with the highest probability was generated given that the product of the mapping probabilities was higher than a certain threshold. Otherwise, no candidate was chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Baseline System",
"sec_num": "3.1"
},
{
        "text": "The search for transliterated pairs was implemented as a variant of depth-first search (Pearl, 1984) , where states represented valid mappings between source and target substrings. At each step, the mapping with the best score was selected and expanded using the mappings learnt from alignment. This process ran until mapping combinations produced target word(s) from a source word or until all possible states were explored. The pseudo code in Figure 1 describes the details of the algorithm. The implementation was optimized via incremental left to right processing of source words, the use of a radix tree to prune invalid paths, and the use of a sorted priority queue to ensure that the highest-weighted candidate was at the top of the queue.",
"cite_spans": [
{
"start": 87,
"end": 100,
"text": "(Pearl, 1984)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 445,
"end": 453,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Description of the Baseline System",
"sec_num": "3.1"
},
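The best-first search described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function and variable names are invented, the mapping table maps a 1-3 character source fragment to (target fragment, probability) pairs, and a plain set of target-word prefixes stands in for the paper's radix tree.

```python
import heapq

def mine(source, mappings, targets):
    """Best-first search over source-word segmentations (a sketch of the
    baseline generative miner). `mappings` maps a source fragment of up to
    3 characters to a list of (target_fragment, probability) pairs; the
    prefix set prunes partial outputs that cannot grow into a target word."""
    prefixes = {w[:i] for w in targets for i in range(len(w) + 1)}
    heap = [(-1.0, 0, "")]  # (negated score, source chars consumed, target built so far)
    while heap:
        neg, pos, out = heapq.heappop(heap)  # pop the best-scoring state
        if pos == len(source):
            if out in targets:
                return out, -neg  # highest-probability candidate that is a real target word
            continue
        for n in (1, 2, 3):  # source fragments of up to 3 characters
            if pos + n > len(source):
                break
            for tgt_frag, p in mappings.get(source[pos:pos + n], []):
                cand = out + tgt_frag
                if cand in prefixes:  # radix-tree-style pruning of dead prefixes
                    heapq.heappush(heap, (neg * p, pos + n, cand))
    return None, 0.0
```

Because scores are products of probabilities in (0, 1], the first complete candidate popped from the queue is guaranteed to be the highest-scoring one, so the search can stop there.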
{
        "text": "We used a threshold on the minimum acceptable transliteration score to filter out unreliable transliterations. Fixing a uniform threshold would have caused the model to filter out long transliterations. Thus, we tied the threshold to the length of transliterated words. We assumed a threshold d for single character mappings and the transliteration threshold for a target word of length l would be d^l. Since we did not have a validation set to estimate d, we created a synthetic validation set from the training set and then used cross-validation to estimate d as follows: we split the training data into 5 folds for cross validation; we modified each validation fold by adding 5 random words to each target word in the transliteration pair; then we performed TM with varying thresholds on the validation fold and computed F-measure; and we ascertained the threshold that led to the highest F-measure for each fold and then took the average threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thresholding",
"sec_num": "3.2"
},
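The length-tied threshold and its estimation can be sketched as below. The d^l rule follows from the score being a product of l per-character mapping probabilities; the grid-search helper and the tuple layout of the validation fold are illustrative assumptions, not the paper's code.

```python
def accept(score, target_length, d):
    """Length-tied threshold: a candidate of length l must clear d**l,
    i.e. its l per-character mapping probabilities must average at least d."""
    return score >= d ** target_length

def estimate_d(validation, candidates):
    """Pick the d maximizing F-measure on a validation fold of
    (score, target_length, is_transliteration) tuples -- a sketch of the
    cross-validation estimation described above."""
    best_d, best_f = None, -1.0
    for d in candidates:
        tp = sum(1 for s, l, y in validation if y and accept(s, l, d))
        fp = sum(1 for s, l, y in validation if not y and accept(s, l, d))
        fn = sum(1 for s, l, y in validation if y and not accept(s, l, d))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        if f > best_f:
            best_d, best_f = d, f
    return best_d
```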
{
        "text": "For Arabic, we performed letter normalization of the different forms of alef, alef maqsoura and ya, and ta marbouta and ha. For English, we case-folded all letters and removed accents, umlauts, and similar diacritic-like marks (e.g., \u00e1, \u00e2, \u00e4, \u00e0, \u00e3, \u0101, \u0105).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Processing",
"sec_num": "3.3"
},
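The normalization steps above can be sketched as follows; this is one plausible rendering, not the authors' code, and Unicode NFKD decomposition is assumed as the mechanism for stripping accents.

```python
import unicodedata

def normalize_english(word):
    # Case-fold, then drop combining accent marks exposed by NFKD decomposition.
    decomposed = unicodedata.normalize("NFKD", word.lower())
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def normalize_arabic(word):
    # Conflate alef variants, alef maqsoura with ya, and ta marbouta with ha.
    table = str.maketrans({"أ": "ا", "إ": "ا", "آ": "ا", "ى": "ي", "ة": "ه"})
    return word.translate(table)
```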
{
        "text": "To motivate graph reinforcement, consider the following example: if alignment produced the mappings \u202b,\ufec1\u0637(\u202c ti), \u202b,\ufec1\u0637(\u202c ta), \u202b,\ufe95\u062a(\u202c ti), and \u202b,\ufe95\u062a(\u202c t), then the mappings \u202b,\ufec1\u0637(\u202c t) and \u202b,\ufe95\u062a(\u202c ta) are likely valid though not observed. These mappings can be induced by traversing the following paths: \u202b\ufec1\u0637\u202c \u2192 ti \u2192 \u202b\ufe95\u062a\u202c \u2192 t and \u202b\ufe95\u062a\u202c \u2192 ti \u2192 \u202b\ufec1\u0637\u202c \u2192 ta respectively. In graph reinforcement, observed mappings were modeled as a bipartite graph with source (S) and target (T) character sequences and weighted with the learnt alignment probabilities (M). Thus the mapping between s \u2208 S and t \u2208 T was m(s,t). Graph reinforcement was performed by traversing the graph from S \u2192 T \u2192 S \u2192 T in order to deduce new mappings. Given a source sequence s' \u2208 S and a target sequence t' \u2208 T, the deduced mapping weights were computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Graph Reinforcement",
"sec_num": "4.1"
},
{
        "text": "m(s\u2032, t\u2032) = 1 \u2212 \u220f_{\u2200 s \u2208 S, t \u2208 T} (1 \u2212 path_{s,t}(s\u2032, t\u2032))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Graph Reinforcement",
"sec_num": "4.1"
},
{
        "text": "where the term path_{s,t}(s\u2032, t\u2032) is the score of the path between s\u2032 and t\u2032 through t and s. De Morgan's law was applied to aggregate different paths using an OR operator, which involved taking the negation of negations of all possible paths aggregated by an AND operator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Graph Reinforcement",
"sec_num": "4.1"
},
{
        "text": "Hence, the probability of an inferred mapping would be boosted if it was obtained from multiple paths. Since some characters, mainly vowels, have a tendency to map to many other characters, link reweighting was applied after each iteration. Link reweighting had the effect of decreasing the weights of target character sequences that have many source character sequences mapping to them and hence reducing the effect of incorrectly inducing mappings. Link reweighting was performed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Graph Reinforcement",
"sec_num": "4.1"
},
{
        "text": "m\u2032(s | t) = m(s | t) / \u2211_{s_i \u2208 S} m(s_i | t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Graph Reinforcement",
"sec_num": "4.1"
},
{
"text": "Where s i \u2208 S is a source sequence that maps to t. This is akin to normalizing conditional probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Graph Reinforcement",
"sec_num": "4.1"
},
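A reinforcement pass with link reweighting can be sketched as below. This is an illustrative reading of the technique, not the authors' code: mappings are stored as a dict from (source sequence, target sequence) to probability, an induced pair (s′, t′) collects every path s′ → t → s → t′ over observed edges, and paths are combined with a noisy-OR per De Morgan's law.

```python
from collections import defaultdict

def reinforce(m):
    """One graph-reinforcement pass (sketch). A mapping (s', t') is induced
    from every path s' -> t -> s -> t' over observed edges, aggregated as
    m(s', t') = 1 - prod(1 - path_score), so multiple paths boost the score."""
    sources = {s for s, _ in m}
    targets = {t for _, t in m}
    out = {}
    for s2 in sources:
        for t2 in targets:
            keep = 1.0 - m.get((s2, t2), 0.0)  # the direct edge counts as one path
            for t in targets - {t2}:
                for s in sources - {s2}:
                    path = m.get((s2, t), 0.0) * m.get((s, t), 0.0) * m.get((s, t2), 0.0)
                    keep *= 1.0 - path
            if keep < 1.0:
                out[(s2, t2)] = 1.0 - keep
    return out

def reweight(m):
    """Link reweighting: normalize each target sequence's incoming weights,
    damping sequences (e.g. vowels) that attract many source sequences."""
    totals = defaultdict(float)
    for (s, t), p in m.items():
        totals[t] += p
    return {(s, t): p / totals[t] for (s, t), p in m.items()}
```

On the motivating example (two source sequences each sharing the target "ti"), the unseen pairs are induced with a score equal to the product of the three edges on their single connecting path.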
{
        "text": "We tested graph reinforcement using 10 iterations in 2 different settings, namely: 1. NEWS-1k: Using the ACL-NEWS workshop dataset. The dataset contained 1,000 parallel transliteration word pairs for training and 1,000 parallel Wikipedia titles for testing. 2. NEWS-10k: Using the test part of the ACL-NEWS dataset, while training with 10,000 manually curated parallel transliterations. Table 1 reports the graph reinforcement results for the two setups. In the NEWS-1k setup, graph reinforcement generally had a positive effect on recall at the expense of precision. However, as we suspected, increasing the amount of training data (as in the NEWS-10k) led to more initial mappings from alignment, but with many erroneously induced mappings that adversely impacted precision. Though recall improved significantly, precision deteriorated significantly, leading to lower F-measure.",
"cite_spans": [],
"ref_spans": [
{
"start": 387,
"end": 394,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Graph Reinforcement Results",
"sec_num": "4.2"
},
{
"text": "To overcome the problem demonstrated in the NEWS-10k setup, we adjusted the graph reinforcement formula to give more confidence to mappings that were observed due to initial alignment and to successively penalize mappings that were induced in later graph reinforcement iterations. The adjustment was as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying Graph Reinforcement with Parameterized Exponential Penalty",
"sec_num": "4.3"
},
{
        "text": "m_i(s\u2032, t\u2032) = 1 \u2212 (1 \u2212 m_{i\u22121}(s\u2032, t\u2032)) \u00b7 \u220f_{s \u2208 S, t \u2208 T} (1 \u2212 e^{\u2212\u03b1i} \u00b7 path_{i\u22121,s,t}(s\u2032, t\u2032))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying Graph Reinforcement with Parameterized Exponential Penalty",
"sec_num": "4.3"
},
{
"text": "Where the parameter \u03b1 adjusts how much we penalize induced mappings and i is the number of iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying Graph Reinforcement with Parameterized Exponential Penalty",
"sec_num": "4.3"
},
{
        "text": "m_i(s\u2032, t\u2032)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying Graph Reinforcement with Parameterized Exponential Penalty",
"sec_num": "4.3"
},
{
        "text": "is the mapping score at iteration i. Basically, newly seen links at iteration i are penalized by e^{\u2212\u03b1i}. The equation is similar to the earlier reinforcement equation but with all paths except the original path s\u2032 \u2192 t\u2032 multiplied by the exponential penalty e^{\u2212\u03b1i}. Since the ACL-NEWS dataset did not have a validation set to help us estimate \u03b1, we opted to use the approach we used earlier to estimate the proper thresholds, namely: we split the training data into 5 folds for cross validation; we modified each validation fold by adding 5 random words to each target word in the transliteration pair; and then we performed TM with varying values of \u03b1 and with 10 graph reinforcement iterations on the validation fold and computed precision and recall. For the 10k training set, we opted to use a 90/10 training/validation split of the training data, where the validation part was modified in the same manner as the validation folds of the ACL-NEWS datasets. We varied the value of \u03b1 between 0.0 and 1.0 with increments of 0.1 and with increments of 1 afterwards for values greater than 1. If two values of \u03b1 yielded the same F-measure (up to 3 decimal places), we favored the larger \u03b1, favoring precision. Figures 2 and 3 plot the precision and recall respectively on the validation (-valid) and test (-test) sets for the 1,000 pair training set.",
"cite_spans": [
{
"start": 1274,
"end": 1282,
"text": "(-valid)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1197,
"end": 1212,
"text": "Figures 2 and 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Modifying Graph Reinforcement with Parameterized Exponential Penalty",
"sec_num": "4.3"
},
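The parameterized exponential penalty can be sketched as a small variant of the reinforcement pass: every induced path, but not the already-established mapping, is damped by e^(-αi) at iteration i. The code below is an illustrative reading under the same dict-of-mappings representation as before, not the authors' implementation; setting α = 0 recovers plain reinforcement, and large α suppresses induced mappings almost entirely.

```python
import math

def reinforce_penalized(m, alpha, i):
    """Reinforcement pass with parameterized exponential penalty (sketch):
    m_i(s',t') = 1 - (1 - m_{i-1}(s',t')) * prod(1 - exp(-alpha*i) * path),
    so later, less reliable inductions contribute less as alpha grows."""
    sources = {s for s, _ in m}
    targets = {t for _, t in m}
    penalty = math.exp(-alpha * i)
    out = {}
    for s2 in sources:
        for t2 in targets:
            keep = 1.0 - m.get((s2, t2), 0.0)  # the original mapping is not penalized
            for t in targets - {t2}:
                for s in sources - {s2}:
                    path = m.get((s2, t), 0.0) * m.get((s, t), 0.0) * m.get((s, t2), 0.0)
                    keep *= 1.0 - penalty * path
            if keep < 1.0:
                out[(s2, t2)] = 1.0 - keep
    return out
```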
{
"text": "We applied exponential penalty on graph reinforcement with the estimated value of \u03b1 on the ACL-NEWS dataset as well as the 10k training set. Table 2 lists the estimated and optimal values of \u03b1 for the different datasets on the training and test sets respectively along with the F-measure obtained for these values of \u03b1. Table 2 also compares the results to the results from baseline and graph reinforcement without exponential penalty. Tables 3 and 4 show precision, recall, and F-measure results for training using ACL-NEWS datasets and the larger training set respectively. For the large dataset of 10k training words, using exponential penalty improved results noticeably, with a 16 basis points improvement in F-measure, and we were able to estimate the optimal \u03b1. For the smaller training set, using exponential penalty with the estimated \u03b1 marginally changed overall results by (-0.006) compared to the optimal \u03b1. The change in overall F-measure was generally small, with most of the degradation in recall being offset by improvements in precision. The small error in estimating \u03b1 for the ACL-NEWS dataset can be attributed to the small size of the validation set. Generally, smaller training sets require smaller values of \u03b1 to allow reinforcement to deduce more unseen mappings, increasing recall. Larger training sets require larger values of \u03b1 and exponential penalty becomes more important. The advantage of this formulation is that \u03b1 can be learned to match training sets of varying sizes. 1,942 in total) . To show the generality of using contextual clues, we tested TM using 3 different techniques, namely: the aforementioned baseline system, graph reinforcement, and using SOUNDEX-like letter conflation for English in the manner suggested by Darwish (2010) . This letter conflation involved removing vowels, \"H\", and 'W\"; and performing the following mappings:",
"cite_spans": [
{
"start": 1758,
"end": 1772,
"text": "Darwish (2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 320,
"end": 327,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 436,
"end": 450,
"text": "Tables 3 and 4",
"ref_id": null
},
{
"start": 1502,
"end": 1517,
"text": "1,942 in total)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Modified Graph Reinforcement Results",
"sec_num": "4.4"
},
{
        "text": "B, F, P, V \u2192 1; C, G, J, K, Q, S, X, Z \u2192 2; D, T \u2192 3; L \u2192 4; M, N \u2192 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modified Graph Reinforcement Results",
"sec_num": "4.4"
},
{
        "text": "R \u2192 6. Such letter conflation was shown to improve TM F-measure on the ACL-NEWS workshop from 0.73 to 0.85 (Darwish, 2010) . Table 5 reports the TM results on the Wikipedia articles. The increased size of the comparable text on which we were performing TM led to adverse effects on precision and recall for the baseline, graph reinforcement, and SOUNDEX setups, with 0.059 precision for SOUNDEX. Graph reinforcement performed slightly better than the baseline both in terms of precision and recall, but with such low precision values, TM may not be useful for many applications. As highlighted earlier, the drop in precision was due to phonetically similar words that were in fact not transliterations. The drop in recall was due to the following: when TM is performed, often the correct transliteration was found but not as the first candidate. Given that for evaluation we were considering the first candidate only, this hurt both precision and recall.",
"cite_spans": [
{
"start": 107,
"end": 122,
"text": "(Darwish, 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Modified Graph Reinforcement Results",
"sec_num": "4.4"
},
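The SOUNDEX-like conflation above can be sketched as follows. This is an illustrative reimplementation under our own reading of the rules (letters outside the listed classes, including vowels, "H", "W", and "Y", are simply dropped), not the authors' code:

```python
# Map each consonant class to its digit, per the conflation table above.
CLASSES = {
    **dict.fromkeys("BFPV", "1"),
    **dict.fromkeys("CGJKQSXZ", "2"),
    **dict.fromkeys("DT", "3"),
    "L": "4",
    **dict.fromkeys("MN", "5"),
    "R": "6",
}

def conflate(word: str) -> str:
    """Drop vowels, H, and W; replace remaining letters by class digits."""
    return "".join(CLASSES[c] for c in word.upper() if c in CLASSES)

print(conflate("Kareem"))  # K->2, R->6, M->5: "265"
print(conflate("Philip"))  # P->1, L->4, P->1: "141"
```

Conflating English words this way makes phonetically close spellings collide, which is what raises recall when matching transliteration candidates.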
{
"text": "To overcome the precision and recall problems, we used contextual information to improve TM for large comparable text. To do so, we filtered articles to extract potentially related fragments and then we applied TM on the extracted fragments. The filtering was performed based on lexical similarity between fragments. The idea was that words that do not share enough contexts were not likely to be transliterations. A byproduct of this approach was a significant reduction in TM running time since the search space was reduced. On the downside, this likely hurt recall as transliterations that do not share similar contexts could not be mined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Context to Improve TM",
"sec_num": "5.2"
},
{
"text": "To extract fragments with similar context we used a phrase table from a phrase-based MT system, which was akin to Moses (Koehn et al., 2007) , to detect similarity between fragments in articles. The MT system was trained using 14 million parallel Arabic-English sentence pairs. The extraction algorithm aimed to extract maximum length fragments that share contexts greater than a specific percentage of fragment lengths. The threshold that we used in our experimentation was 30%. When picking the threshold, our goal was to find transliterations that appear in similar and not necessarily identical contexts. The threshold was determined qualitatively on a validation set.",
"cite_spans": [
{
"start": 120,
"end": 140,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Context to Improve TM",
"sec_num": "5.2"
},
{
"text": "A brute force fragment extraction approach would extract all possible fragments in source and target articles, iterate on each word in each pair of fragments to find the mappings, and then include a fragment if the mappings count exceed the threshold. Such a brute force approach would have an order of N 3 M 3 , where N and M are the number of words in the source and target articles respectively. To improve the running time, we first removed stop words from the source list. Then, we created a list that contained the positions of each of the matching pairs in source and target articles sorted by source words' position. This operation had a complexity of O(NlogM). Next, we iterated on source fragments of different size, which was O(N 2 ), and added the positions of matches in the target article in a sorted list. This operation was O(KlogK) where K is the number of matches. Then, we iterated on extracted matches to find target fragment that satisfied the condition:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Context to Improve TM",
"sec_num": "5.2"
},
{
"text": "Fragment Length number of mappings \u2265 .3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Context to Improve TM",
"sec_num": "5.2"
},
{
"text": "The last step was O(K) in the worst case. The total complexity of this algorithm was O(N 2 KlogK) in the worst case, which had a much lower complexity than the brute force approach. In practice, the algorithm filtered 30 comparable pairs of articles with an average of 4.9k words for English and 3.6k for Arabic in less than 5 minutes. Details of the algorithm are shown in Figure 2 . Table 6 reports TM results on the extracted segments. As the results show, TM on extracted segments dramatically improved precision for all setups compared to TM on the full articles (as in Table 6 ). Except for the SOUNDEX setup, recall dropped by 9.3 and 8.3 basis points for the baseline and graph reinforcement setups respectively. Though F-measure dropped slightly for the baseline case and improved slightly for the reinforcement case, what is noteworthy is that precision was high enough to make TM practically useful for a variety of applications. The major advantage of the proposed technique is the achievement of relatively high precisioncomparable to precision on small text snippets. Though recall is relatively low, the ubiquity of comparable texts can help produce large mined transliterations of high quality. ",
"cite_spans": [],
"ref_spans": [
{
"start": 374,
"end": 382,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 385,
"end": 392,
"text": "Table 6",
"ref_id": "TABREF4"
},
{
"start": 575,
"end": 582,
"text": "Table 6",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Using Context to Improve TM",
"sec_num": "5.2"
},
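The extraction condition above can be condensed into a short sketch. We make simplifying assumptions of our own: the tightest target span covering the matched positions serves as the candidate target fragment, and the 30% threshold is applied to that span's length. Function and variable names are illustrative, not the paper's:

```python
def extract_fragments(matches, n_source, min_len=3, threshold=0.3):
    """Find (src_start, src_end, tgt_start, tgt_end) fragment pairs.

    `matches` is a list of (source_pos, target_pos) word-level mappings
    derived from a phrase table, sorted by source position.
    """
    fragments = []
    for start in range(n_source):
        for end in range(start + min_len, n_source + 1):
            # target positions matched from within the source window
            tgt = sorted(t for s, t in matches if start <= s < end)
            if not tgt:
                continue
            # tightest target span covering all matched positions
            span_len = tgt[-1] - tgt[0] + 1
            # keep the pair if mappings cover >= `threshold` of the span
            if len(tgt) / span_len >= threshold:
                fragments.append((start, end, tgt[0], tgt[-1] + 1))
    return fragments
```

Restricting TM to the returned fragment pairs is what shrinks the search space and removes most phonetically similar non-transliterations.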
{
"text": "In this paper, we explored the use of transliteration mining in the context of using large training and test sets. Since recent work was conducted on small parallel text segments that were just a few words long with limited training data, the state-ofthe-art techniques generally favored recall by inducing mappings that were unseen in training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "Since the parallel test segments were short, improvements in recall had a very small effect on precision. When we applied the best reported method in the literature using large training data or when performing TM on large comparable texts, drops in precision and recall were substantial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "We modified the formulation of graph reinforcement by introducing a parameterized exponential penalty to allow for the discovery of new letter mappings using graph walks while penalizing mappings that required more graph walk steps to be induced. We showed how to effectively estimate the exponential penalty parameter for training sets of different sizes. In the context of performing TM on short parallel segments using 10k training words, we improved TM precision from 0.689 to 0.976 at the expense of a small drop in recall from 0.960 to 0.948.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "What we observed for graph reinforcement is symptomatic of algorithms that may fail when more data is present. Other such examples include stemming for MT and IR. Generally, with more MT parallel data or bigger IR collections, stemming may become less useful or harmful. It is advantageous to parameterize algorithms for tuning for dataset of different sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "When performing TM on large comparable texts, we initially filtered the text to produce short comparable text segments and then we performed TM on them. Though the approach is relatively simple, it led to pronounced improvement in TM precision from 0.650 to 0.946, with a drop in recall from 0.500 to 0.417. Given that comparable texts 1: Input: Matches, a list of matches between word position in source article and its mapping in the target article sorted by source position 2: Input: Source, list of source words; Target, list of target words 4: Output: ParallelFragments : List of pairs of parallel fragments 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "For startPosition=0 To Source.Lenght 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "For endPosition = startPosition + MinimumFragmentLengh To Source.Lenght 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "SortedList TargetMatches =[ ] 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "ForEach match Between startPosistion And endPosition In Matches 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "TargetMatches.Add (Matching[match] .targetPosition) 10: startItr=0; endItr=TargetMatches.Length -1 12:",
"cite_spans": [
{
"start": 18,
"end": 34,
"text": "(Matching[match]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "For i=0 to TargetMatches. are ubiquitous, improvements in precision are likely more important than drops in recall. For future work, we want to test the effect of improved TM in the context of different NLP applications such as MT and cross language IR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
}
],
"back_matter": [
{
"text": "Some of the experiments were conducted while the authors were at the Cairo Microsoft Innovation Center.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Paraphrasing with Bilingual Parallel Corpora. ACL-2005",
"authors": [
{
"first": "C",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "597--604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Bannard, C. Callison-Burch. 2005. Paraphrasing with Bilingual Parallel Corpora. ACL-2005, pp. 597- 604.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Extracting transliteration pairs from comparable corpora",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bilac",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bilac, H. Tanaka. 2005. Extracting transliteration pairs from comparable corpora. NLP-2005.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatically harvesting Katakana-English term pairs from search engine query logs",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kacmarcik",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "393--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Brill, G. Kacmarcik, C. Brockett. 2001. Automatically harvesting Katakana-English term pairs from search engine query logs. NLPRS 2001, pp. 393-399.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Transliteration Mining with Phonetic Conflation and Iterative Training. ACL NEWS workshop",
"authors": [
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Darwish. 2010. Transliteration Mining with Phonetic Conflation and Iterative Training. ACL NEWS workshop 2010.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improved Transliteration Mining Using Graph Reinforcement",
"authors": [
{
"first": "A",
"middle": [
"El"
],
"last": "Kahki",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saad El Din",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "El-Wahab",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hefny",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Ammar",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "1384--1393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. El Kahki, K. Darwish, A. Saad El Din, M. Abd El- Wahab, A. Hefny, W. Ammar. Improved Transliteration Mining Using Graph Reinforcement. EMNLP-2011. pp. 1384-1393.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Extracting Named Entity Translingual Equivalence with Limited Resources",
"authors": [
{
"first": "H",
"middle": [],
"last": "Fei",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "TALIP",
"volume": "2",
"issue": "2",
"pages": "124--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Fei, S. Vogel, A. Waibel. 2003. Extracting Named Entity Translingual Equivalence with Limited Resources. TALIP, 2(2):124-129.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using Word-Dependent Transition Models in HMM based Word Alignment for Statistical Machine Translation",
"authors": [
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL-07 2nd SMT workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. He. 2007. Using Word-Dependent Transition Models in HMM based Word Alignment for Statistical Machine Translation. ACL-07 2nd SMT workshop.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Extracting parallel phrases from comparable data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hewavitharana",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2011,
"venue": "The 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hewavitharana, S. Vogel. 2011. Extracting parallel phrases from comparable data. The 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, June 24-24, 2011, Portland, Oregon",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Transliteration Generation and Mining with Limited Training Resources. ACL NEWS workshop",
"authors": [
{
"first": "S",
"middle": [],
"last": "Jiampojamarn",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Dwyer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bhargava",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "M",
"middle": [
"Y"
],
"last": "Kim",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Jiampojamarn, K. Dwyer, S. Bergsma, A. Bhargava, Q. Dou, M.Y. Kim and G. Kondrak. 2010. Transliteration Generation and Mining with Limited Training Resources. ACL NEWS workshop 2010.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic Extraction of English-Chinese Transliteration Pairs using Dynamic Window and Tokenizer",
"authors": [
{
"first": "C",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "D",
"middle": [
"I"
],
"last": "Kim",
"suffix": ""
},
{
"first": "S",
"middle": [
"H"
],
"last": "Na",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Sixth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Jin, D.I. Kim, S.H. Na, J.H. Lee. 2008. Automatic Extraction of English-Chinese Transliteration Pairs using Dynamic Window and Tokenizer. Sixth SIGHAN Workshop on Chinese Language Processing, 2008.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Named Entity Transliteration and Discovery from Multilingual Comparable Corpora. HLT Conf. of the North American Chapter of the ACL",
"authors": [
{
"first": "A",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "82--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Klementiev, D. Roth. 2006. Named Entity Transliteration and Discovery from Multilingual Comparable Corpora. HLT Conf. of the North American Chapter of the ACL, pages 82-88.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation. ACL-2007",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, E. Herbst (2007). Moses: Open Source Toolkit for Statistical Machine Translation. ACL-2007, Demo Session.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hitting the Right Paraphrases in Good Time",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kok",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: NAACL-2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kok, C. Brockett. 2010. Hitting the Right Paraphrases in Good Time. Human Language Technologies: NAACL-2010.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "ACL 2010",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "21--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Named Entities Workshop, ACL 2010, pages 21-28.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning Transliteration Lexicons from the Web. COLING-ACL-2006",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Kuo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [
"K"
],
"last": "Yang",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "1129--1136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.S. Kuo, H. Li, Y.K. Yang. 2006. Learning Transliteration Lexicons from the Web. COLING- ACL-2006, 1129 -1136.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A phonetic similarity model for automatic extraction of transliteration pairs",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Kuo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [
"K"
],
"last": "Yang",
"suffix": ""
}
],
"year": 2007,
"venue": "TALIP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.S. Kuo, H. Li, Y.K. Yang. 2007. A phonetic similarity model for automatic extraction of transliteration pairs. TALIP, 2007",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mining Transliterations from Web Query Results: An Incremental Approach",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Kuo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "C",
"middle": [
"L"
],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Sixth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.S. Kuo, H. Li, C.L. Lin. 2008. Mining Transliterations from Web Query Results: An Incremental Approach. Sixth SIGHAN Workshop on Chinese Language Processing, 2008.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Incorporating Pronunciation Variation into Extraction of Transliterated-term Pairs from Web Corpora",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Kuo",
"suffix": ""
},
{
"first": "Y",
"middle": [
"K"
],
"last": "Yang",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Chinese Language and Computing",
"volume": "15",
"issue": "1",
"pages": "33--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.S. Kuo, Y.K. Yang. 2005. Incorporating Pronunciation Variation into Extraction of Transliterated-term Pairs from Web Corpora. Journal of Chinese Language and Computing, 15 (1): (33-44).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Acquisition of English-Chinese transliterated word pairs from parallelaligned texts using a statistical machine transliteration model",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Lee",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2003,
"venue": "Workshop on Building and Using Parallel Texts, HLT-NAACL-2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.J. Lee, J.S. Chang. 2003. Acquisition of English- Chinese transliterated word pairs from parallel- aligned texts using a statistical machine transliteration model. Workshop on Building and Using Parallel Texts, HLT-NAACL-2003, 2003.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Extracting parallel sub-sentential fragments from non-parallel corpora",
"authors": [
{
"first": "D",
"middle": [
"S"
],
"last": "Munteanu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.S. Munteanu, D. Marcu. 2006. Extracting parallel sub-sentential fragments from non-parallel corpora. ACL-2006, p.81-88.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Language Independent Transliteration Mining System Using Finite State Automata Framework",
"authors": [
{
"first": "S",
"middle": [],
"last": "Noeman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Madkour",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Noeman, A. Madkour. 2010. Language Independent Transliteration Mining System Using Finite State Automata Framework. ACL NEWS workshop 2010.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automated Mining Of Names Using Parallel Hindi-English Corpus",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mahesh",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sinha",
"suffix": ""
}
],
"year": 2009,
"venue": "7th Workshop on Asian Language Resources",
"volume": "",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mahesh, K. Sinha. 2009. Automated Mining Of Names Using Parallel Hindi-English Corpus. 7th Workshop on Asian Language Resources, ACL- IJCNLP 2009, pages 48-54, 2009.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Recognizing transliteration equivalents for enriching domain specific thesauri",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Oh",
"suffix": ""
},
{
"first": "K",
"middle": [
"S"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "231--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.H. Oh, K.S. Choi. 2006. Recognizing transliteration equivalents for enriching domain specific thesauri. 3rd Intl. WordNet Conf., pp. 231-237, 2006.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Mining the Web for Transliteration Lexicons: Joint-Validation Approach",
"authors": [
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE/WIC/ACM Intl. Conf. on Web Intelligence (WI'06)",
"volume": "",
"issue": "",
"pages": "254--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jong-Hoon Oh, Hitoshi Isahara. 2006. Mining the Web for Transliteration Lexicons: Joint-Validation Approach. pp.254-261, 2006 IEEE/WIC/ACM Intl. Conf. on Web Intelligence (WI'06), 2006.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Automatic transliteration for Japanese-to-English text retrieval",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "353--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Qu, Gregory Grefenstette, David A. Evans. 2003. Automatic transliteration for Japanese-to-English text retrieval. SIGIR 2003:353-360",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The Web as a parallel corpus",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics -Special issue on web as corpus",
"volume": "29",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Resnik, N. Smith. 2003. The Web as a parallel corpus. Computational Linguistics -Special issue on web as corpus, Vol. 29 Issue 3, Sept. 2003",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Some Experiments in Mining Named Entity Transliteration Pairs from Comparable Corpora",
"authors": [
{
"first": "K",
"middle": [],
"last": "Saravanan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumaran",
"suffix": ""
}
],
"year": 2008,
"venue": "The 2nd Intl. Workshop on Cross Lingual Information Access: Addressing the Need of Multilingual Societies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Saravanan, A Kumaran. 2008. Some Experiments in Mining Named Entity Transliteration Pairs from Comparable Corpora. The 2nd Intl. Workshop on Cross Lingual Information Access: Addressing the Need of Multilingual Societies, 2008.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Extracting parallel sentences from comparable corpora using document level alignment",
"authors": [
{
"first": "J",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: NAACL-2010",
"volume": "",
"issue": "",
"pages": "403--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Smith, C. Quirk, K. Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment, Human Language Technologies: NAACL-2010, p.403-411.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "They Are Out There, If You Know Where to Look\": Mining Transliterations of OOV Query Terms for Cross-Language Information Retrieval",
"authors": [
{
"first": "R",
"middle": [],
"last": "Udupa",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Saravanan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bakalov",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bhole",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Udupa, K. Saravanan, A. Bakalov, A. Bhole. 2009a. \"They Are Out There, If You Know Where to Look\": Mining Transliterations of OOV Query Terms for Cross-Language Information Retrieval. ECIR-2009.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "MINT: A Method for Effective and Scalable Mining of Named Entity Transliterations from Large Comparable Corpora",
"authors": [
{
"first": "R",
"middle": [],
"last": "Udupa",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Saravanan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kumaran",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Jagarlamudi",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Udupa, K. Saravanan, A. Kumaran, J. Jagarlamudi. 2009b. MINT: A Method for Effective and Scalable Mining of Named Entity Transliterations from Large Comparable Corpora. EACL 2009.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Transliteration Equivalence using Canonical Correlation Analysis. ECIR-2010",
"authors": [
{
"first": "R",
"middle": [],
"last": "Udupa",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Khapra",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Udupa, M. Khapra. 2010a. Transliteration Equivalence using Canonical Correlation Analysis. ECIR-2010, 2010.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Hashing-based Approaches to Spelling Correction of Personal Names",
"authors": [
{
"first": "R",
"middle": [],
"last": "Udupa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2010,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Udupa, S. Kumar. 2010b. Hashing-based Approaches to Spelling Correction of Personal Names. EMNLP 2010.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Mining Name Translations from Entity Graph Mapping",
"authors": [
{
"first": "G",
"middle": [
"W"
],
"last": "You",
"suffix": ""
},
{
"first": "S",
"middle": [
"W"
],
"last": "Hwang",
"suffix": ""
},
{
"first": "Y",
"middle": [
"I"
],
"last": "Song",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "430--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G.W. You, S.W. Hwang, Y.I. Song, L. Jiang, Z. Nie. 2010. Mining Name Translations from Entity Graph Mapping. EMNLP-2010, pp. 430-439.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Pivot Approach for Extracting Paraphrase Patterns from Bilingual Corpora",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "780--788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Zhao, H. Wang, T. Liu, S. Li. 2008. Pivot Approach for Extracting Paraphrase Patterns from Bilingual Corpora. ACL-08: HLT, pp. 780-788.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Pseudo code for transliteration mining",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Precision (y-axis) on test and validation sets for varying values of a (x-axis) for the 1k set 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Recall (y-axis) on test and validation setsfor varying values of \u03b1 (x-axis)for the 1k set",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Precision (y-axis) on test and validation sets for varying values of \u03b1 (x-axis) for the 10k setFigure 5. Recall (y-axis) on test and validation sets for varying values of \u03b1 (x-axis) for the 10k set Figures 4 and 5 plot the same for the 10k pair training set. The precision and recall values on the validation sets are indicative of their behavior on the test set. Due to the difference in training data sizes, the best values of \u03b1 were significantly larger for the 10k dataset compared to the 1k dataset.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "Pseudo code for the fragment extraction algorithm",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td/><td>Baseline</td><td>Reinforcement</td></tr><tr><td/><td>P</td><td>0.988</td><td>0.977</td></tr><tr><td>NEWS-1k</td><td>R</td><td>0.583</td><td>0.912</td></tr><tr><td/><td>F</td><td>0.733</td><td>0.943</td></tr><tr><td/><td>P</td><td>0.917</td><td>0.689</td></tr><tr><td>NEWS-10k</td><td>R</td><td>0.759</td><td>0.960</td></tr><tr><td/><td>F</td><td>0.787</td><td>0.802</td></tr></table>"
},
"TABREF2": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td/><td/><td/><td>\u03b1</td></tr><tr><td/><td/><td colspan=\"3\">NEWS-1k NEWS-10k</td></tr><tr><td>Baseline</td><td/><td>0.757</td><td/><td>0.787</td></tr><tr><td colspan=\"2\">Reinforcement (\u03b1=0)</td><td>0.941</td><td/><td>0.802</td></tr><tr><td colspan=\"2\">Estimated \u03b1</td><td>0.3</td><td/><td>6.0</td></tr><tr><td colspan=\"2\">@ Estimated \u03b1</td><td>0.935</td><td/><td>0.963</td></tr><tr><td colspan=\"2\">Optimal \u03b1 (on test)</td><td>0.1</td><td/><td>6.0</td></tr><tr><td>@ optimal \u03b1</td><td/><td>0.943</td><td/><td>0.963</td></tr><tr><td colspan=\"5\">Table 3. Results for training using 1k training set</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F1</td></tr><tr><td>Baseline</td><td/><td colspan=\"3\">0.975 0.619 0.757</td></tr><tr><td colspan=\"2\">Reinforcement (\u03b1=0)</td><td colspan=\"3\">0.975 0.912 0.941</td></tr><tr><td>@ estimated \u03b1</td><td/><td colspan=\"3\">0.980 0.894 0.935</td></tr><tr><td colspan=\"5\">Table 4. Results for training using 10k training set</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F1</td></tr><tr><td>Baseline</td><td/><td colspan=\"3\">0.917 0.759 0.787</td></tr><tr><td colspan=\"2\">Reinforcement (\u03b1=0)</td><td colspan=\"3\">0.689 0.960 0.802</td></tr><tr><td>@ estimated \u03b1</td><td/><td colspan=\"3\">0.976 0.948 0.963</td></tr><tr><td colspan=\"5\">tested TM using the 1,000 training pairs from</td></tr><tr><td colspan=\"5\">the ACL-NEWS workshop on the longest 30</td></tr><tr><td colspan=\"5\">English Wikipedia articles with equivalent Arabic</td></tr><tr><td colspan=\"5\">Wikipedia articles. The test articles had the</td></tr><tr><td colspan=\"2\">following properties:</td><td/><td/></tr><tr><td colspan=\"5\">Max. Len Min. Len Avg. 
Len</td></tr><tr><td>Arabic</td><td>10,165</td><td>1,837</td><td/><td>3,614</td></tr><tr><td>English</td><td>10,710</td><td>3,133</td><td/><td>4,896</td></tr><tr><td colspan=\"5\">The article pairs had 64.7 transliterations on</td></tr><tr><td>average (with</td><td/><td/><td/></tr></table>"
},
"TABREF3": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td>Baseline</td><td>SOUNDEX</td><td>Reinforcement</td></tr><tr><td>P</td><td>0.610</td><td>0.059</td><td>0.650</td></tr><tr><td>R</td><td>0.415</td><td>0.402</td><td>0.500</td></tr><tr><td>F</td><td>0.494</td><td>0.103</td><td>0.565</td></tr></table>"
},
"TABREF4": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td>Baseline</td><td>SOUNDEX</td><td>Reinforcement</td></tr><tr><td>P</td><td>0.962</td><td>0.524</td><td>0.946</td></tr><tr><td>R</td><td>0.322</td><td>0.418</td><td>0.417</td></tr><tr><td>F</td><td>0.482</td><td>0.465</td><td>0.579</td></tr></table>"
}
}
}
}