{
"paper_id": "Y10-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:40:25.361901Z"
},
"title": "Mitigating Problems in Analogy-based EBMT with SMT and vice versa: a Case Study with Named Entity Transliteration *",
"authors": [
{
"first": "Sandipan",
"middle": [],
"last": "Dandapat",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Next Generation Localisation Dublin City University Glasnevin",
"institution": "",
"location": {
"settlement": "Dublin 9",
"country": "Ireland"
}
},
"email": "sdandapat@computing.dcu.ie"
},
{
"first": "Sara",
"middle": [],
"last": "Morrissey",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Next Generation Localisation Dublin City University Glasnevin",
"institution": "",
"location": {
"settlement": "Dublin 9",
"country": "Ireland"
}
},
"email": ""
},
{
"first": "Kumar",
"middle": [],
"last": "Naskar",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Next Generation Localisation Dublin City University Glasnevin",
"institution": "",
"location": {
"settlement": "Dublin 9",
"country": "Ireland"
}
},
"email": "snaskar@computing.dcu.ie"
},
{
"first": "Harold",
"middle": [],
"last": "Somers",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Next Generation Localisation Dublin City University Glasnevin",
"institution": "",
"location": {
"settlement": "Dublin 9",
"country": "Ireland"
}
},
"email": "hsomers@computing.dcu.ie"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Five years ago, a number of papers reported an experimental implementation of an Example Based Machine Translation (EBMT) system using proportional analogy. This approach, a type of analogical learning, was attractive because of its simplicity, and these papers reported considerable success with the method using various language pairs. In this paper, we describe our attempt to use this approach for tackling English-Hindi Named Entity (NE) Transliteration. We have implemented our own EBMT system using proportional analogy and have found that the analogy-based system on its own has high precision but low recall, since a large number of names remain untransliterated with this approach. However, mitigating problems in analogy-based EBMT with SMT and vice versa has shown considerable improvement over the individual approaches.",
"pdf_parse": {
"paper_id": "Y10-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "Five years ago, a number of papers reported an experimental implementation of an Example Based Machine Translation (EBMT) system using proportional analogy. This approach, a type of analogical learning, was attractive because of its simplicity, and these papers reported considerable success with the method using various language pairs. In this paper, we describe our attempt to use this approach for tackling English-Hindi Named Entity (NE) Transliteration. We have implemented our own EBMT system using proportional analogy and have found that the analogy-based system on its own has high precision but low recall, since a large number of names remain untransliterated with this approach. However, mitigating problems in analogy-based EBMT with SMT and vice versa has shown considerable improvement over the individual approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "modest success using PA for a full translation task, the idea has been adapted to translating unknown words in the context of other approaches to MT, as reported by Denoual (2007), Langlais and Patry (2007), and Langlais et al. (2009). Denoual's (2007) experiments attempt to translate all unknown words in a Japanese-English task and report that translation adequacy (NIST score) improves but fluency (BLEU score) remains stable or decreases. Langlais and Patry (2007) had more success in handling unknown words when the language pairs are quite close in morphological structure. Langlais and Yvon (2008) use PA to supplement the words and phrases of a standard SMT system when a word to be translated is not covered by the statistical model. Finally, Langlais et al. (2009) applied the method to the translation of medical terms and showed little improvement over purely statistical approaches. Since no off-the-shelf implementation is available for solving analogies, we have implemented our own EBMT system from scratch using PA, based on the description in Lepage (2005c). A PA-based system often suffers from low recall. We first try to improve the PA-based system by introducing new heuristics to overcome the low recall. Furthermore, we improve the system accuracy by combining an SMT-based system with the PA-based system.",
"cite_spans": [
{
"start": 158,
"end": 172,
"text": "Denoual (2007)",
"ref_id": "BIBREF0"
},
{
"start": 175,
"end": 200,
"text": "Langlais and Patry (2007)",
"ref_id": "BIBREF3"
},
{
"start": 207,
"end": 229,
"text": "Langlais et al. (2009)",
"ref_id": "BIBREF5"
},
{
"start": 232,
"end": 248,
"text": "Denoual's (2007)",
"ref_id": "BIBREF0"
},
{
"start": 450,
"end": 475,
"text": "Langlais and Patry (2007)",
"ref_id": "BIBREF3"
},
{
"start": 588,
"end": 612,
"text": "Langlais and Yvon (2008)",
"ref_id": "BIBREF4"
},
{
"start": 752,
"end": 774,
"text": "Langlais et al. (2009)",
"ref_id": "BIBREF5"
},
{
"start": 1058,
"end": 1072,
"text": "Lepage (2005c)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2 we describe the underlying process of EBMT using PA. Section 3 describes our implementation and different heuristics used for solving analogies. Section 4 describes a number of experiments carried out and empirically compares results against a standard SMT system with error analysis for the NE transliteration task. We conclude in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "PAs are global relationships between four objects",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proportional Analogy",
"sec_num": "2.1"
},
{
"text": "A : B :: C : D, read as \"A is to B as C is to D\". The symbol '::' is sometimes replaced with an equals sign (=) to denote an equation. Viewed as an equation, this formulation can have zero, one, or more solutions if any of the objects (usually D) is treated as a variable. PAs are often seen as a form of knowledge representation in Artificial Intelligence because of their power to represent world knowledge and the lexical relations encoded in them. In NLP, analogies have been used as an instrument to explain inflectional and derivational morphology (Lepage, 1998). Lepage and Denoual (2005a,b,c) showed how an EBMT system can be built based on the algorithm proposed by Lepage (1998). Treating a sentence as a string of characters, they note that PAs can be handled as in (1).",
"cite_spans": [
{
"start": 535,
"end": 549,
"text": "(Lepage, 1998)",
"ref_id": "BIBREF6"
},
{
"start": 552,
"end": 582,
"text": "Lepage and Denoual (2005a,b,c)",
"ref_id": null
},
{
"start": 657,
"end": 670,
"text": "Lepage (1998)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proportional Analogy",
"sec_num": "2.1"
},
{
"text": "(1) They swam in the sea : They swam across the river :: It floated in the sea : It floated across the river",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EBMT using Analogy",
"sec_num": "2.2"
},
{
"text": "To build up an EBMT system, we must assume a database of example pairs, where each pair is a source and target language translation equivalent. For the first three sentences in (1), the translation equivalents in Spanish are given in (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EBMT using Analogy",
"sec_num": "2.2"
},
{
"text": "(2) a. Nadaron en el mar. b. Atravesaron el r\u00edo nadando. c. Flot\u00f3 en el mar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EBMT using Analogy",
"sec_num": "2.2"
},
{
"text": "Suppose now that we want to translate the sentence It floated across the river. The translation process is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EBMT using Analogy",
"sec_num": "2.2"
},
{
"text": "1. Find a pair (A, B) of sentences in the example set that satisfies the PA in (3). 2. Take the translations corresponding to A, B and C (noted A\u2032, B\u2032, C\u2032).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EBMT using Analogy",
"sec_num": "2.2"
},
{
"text": "x represents the desired translation. (4) A\u2032 : B\u2032 :: C\u2032 : x. Substituting the three sentences in (2) into (4), we have a solvable equation with x = Atraves\u00f3 el r\u00edo flotando, which is an acceptable translation. However, due to the unconstrained nature of PA, there is always a possibility of solving \"false analogies\", i.e. sets of strings for which the analogy holds but which do not represent a valid linguistic relationship. Example (5) illustrates this phenomenon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solve the equation in (4):",
"sec_num": "3."
},
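The target-side step reduces translation to solving an analogical equation over plain character strings. As a rough illustration only, a sketch (not Lepage's full algorithm, which also handles multiple simultaneous edit sites; all names below are our own), the common single-replacement case can be coded as:

```python
def solve_analogy(a: str, b: str, c: str):
    """Solve A : B :: C : x for strings, in the single-edit case.

    If B is A with one contiguous substring replaced, the same
    replacement is applied to C.  Returns None when no solution
    is found by this limited strategy.
    """
    # Longest common prefix of A and B.
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    # Longest common suffix that does not overlap the prefix.
    j = 0
    while j < min(len(a), len(b)) - i and a[-1 - j] == b[-1 - j]:
        j += 1
    a_mid, b_mid = a[i:len(a) - j], b[i:len(b) - j]
    if a_mid:                      # replacement: swap a_mid for b_mid in C
        return c.replace(a_mid, b_mid, 1) if a_mid in c else None
    if i == len(a):                # pure suffixation, e.g. walk : walked
        return c + b_mid
    if j == len(a):                # pure prefixation
        return b_mid + c
    return None
```

On example (1), `solve_analogy("They swam in the sea", "They swam across the river", "It floated in the sea")` yields "It floated across the river"; the same mechanism handles morphological analogies at the character level.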
{
"text": "(5) Yea : Yep :: At five a.m. : At five p.m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solve the equation in (4):",
"sec_num": "3."
},
{
"text": "An EBMT system using PAs shows quadratic time complexity. Thus, we only look for time-bounded solutions, i.e. we allow the process to continue for a fixed amount of time. We also apply heuristics (described in Section 3.1) to filter out some of the PAs and to try better candidates first by ranking the equations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solve the equation in (4):",
"sec_num": "3."
},
{
"text": "We have implemented the EBMT system using PAs based on Lepage (1998, 2005c). We distinguish three main components in our system, which are used to solve both source- and target-side analogies. The three main components of the analogy-based EBMT, namely Heuristics, Analogy Verifier and Analogy Solver, are depicted in Figure 1. Firstly, the system requires some knowledge about choosing relevant <A, B> pairs from the example-base, both to ensure that the better candidate analogical equations from the set of all possible analogies are solved first and to filter out some of the unsolvable analogies before verification. We adopt different heuristics to ensure this. Secondly, there is an Analogy Verifier, which decides the solvability of an analogical equation. The third component solves the analogy as in (4) based on the triplet <A, B, D> and produces C; note that D is the input sentence to be translated. We call this module the Analogy Solver. Once C is produced on the source side, we find the translation equivalents <A\u2032, B\u2032, C\u2032> on the target side for the source-side <A, B, C> triplet. We then apply the three components on the target side in the same order to obtain one candidate translation D\u2032 as in (4). Collecting all D\u2032, we rank them by frequency, as different analogical equations might produce identical solutions.",
"cite_spans": [
{
"start": 55,
"end": 67,
"text": "Lepage (1998",
"ref_id": "BIBREF6"
},
{
"start": 68,
"end": 84,
"text": "Lepage ( , 2005c",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3"
},
{
"text": "We adopted different heuristics from the literature to understand their relative performance in translation tasks under the time-constrained model. Note that when no heuristic is applied, to transliterate one input in our NE transliteration task, the average number of analogical equations processed within 1 second is around 600K on the source side and 40K on the target side. Out of these 40K target-side equations, the average number that generates the final solution is only 0.692. As we will see, the various heuristics affect the number of equations solved or attempted, ideally cutting down effort wasted on computations which will not contribute to a useful solution. The heuristics do this in different ways, and with varying success. We first choose the heuristic from Lepage and Denoual (2005a,c), which selects a relevant pair <A, B> based on a length comparison with the input D.",
"cite_spans": [
{
"start": 814,
"end": 842,
"text": "Lepage and Denoual (2005a,c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics",
"sec_num": "3.1"
},
{
"text": "Consider as candidates only sentences whose length is more than half and less than double the length of the input sentence. Formally, |D|/2 \u2264 |A|, |B| \u2264 2|D|, where |x| is the length of x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H1:",
"sec_num": null
},
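H1 amounts to a one-line length filter over the example-base. A minimal sketch (the function and variable names are our own, not the paper's):

```python
def h1_candidates(examples, d):
    """H1: keep only example strings whose length lies between half and
    double the length of the input D (applied when picking A and B)."""
    lo, hi = len(d) / 2, 2 * len(d)
    return [s for s in examples if lo <= len(s) <= hi]
```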
{
"text": "With the help of H1, we are able to solve around 705K analogical equations on the source side and around 34K on the target side in 1 second. This heuristic solves more equations on the source side but effectively reduces the number on the target side, and the average number of equations that produce output is 0.335. This is reflected in the overall output of the experiments shown later in Table 1 (in Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H1:",
"sec_num": null
},
{
"text": "Our second heuristic is based on that of Lepage and Lardilleux (2007), which speeds up the search for relevant <A, B> pairs. This is done by sorting the corpus with respect to the sentence to be translated (D), using edit distance for the selection of As, and selecting Bs based on the inclusion score (Lepage, 1998:730), i.e. the length of B minus its similarity to D.",
"cite_spans": [
{
"start": 41,
"end": 69,
"text": "Lepage and Lardilleux (2007)",
"ref_id": "BIBREF11"
},
{
"start": 302,
"end": 320,
"text": "(Lepage, 1998:730)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "H1:",
"sec_num": null
},
{
"text": "Consider as candidates primarily sentence pairs where A has a low edit distance w.r.t. D, and B has a low inclusion score w.r.t. D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H2:",
"sec_num": null
},
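A sketch of the H2 selection, assuming standard Levenshtein edit distance and reading "similarity" as longest-common-subsequence length (one plausible interpretation; Lepage (1998:730) defines the inclusion score precisely, and all function names here are our own):

```python
def edit_distance(s, t):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

def lcs_len(s, t):
    """Length of the longest common subsequence of s and t."""
    prev = [0] * (len(t) + 1)
    for cs in s:
        cur = [0]
        for j, ct in enumerate(t, 1):
            cur.append(prev[j - 1] + 1 if cs == ct
                       else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def h2_rank(examples, d):
    """H2: rank A candidates by edit distance to D, and B candidates by
    inclusion score |B| - sim(B, D), trying low-scoring pairs first."""
    a_ranked = sorted(examples, key=lambda s: edit_distance(s, d))
    b_ranked = sorted(examples, key=lambda s: len(s) - lcs_len(s, d))
    return a_ranked, b_ranked
```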
{
"text": "We are able to solve around 788K source-side and 42K target-side analogical equations with the help of H2 within 1 second. We found that with this heuristic, the average number of analogical equations that lead to output is 176. Thus, this is expected to work well with our current experimental setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H2:",
"sec_num": null
},
{
"text": "In the third heuristic, we adopt a \"trick\" described by Langlais and Yvon (2008), called S-TRICK, based on the simple requirement of sharing the same first or last symbol.",
"cite_spans": [
{
"start": 56,
"end": 80,
"text": "Langlais and Yvon (2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "H2:",
"sec_num": null
},
{
"text": "Consider a candidate pair where A shares the same first or last character with B or D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H3:",
"sec_num": null
},
{
"text": "[1 ] { [1 ] , [1 ] } a n d [ $ ] [ [ $ ] , [ $ ] } A B D A B D \u2208 \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H3:",
"sec_num": null
},
{
"text": "where S[1] and S[$] are respectively the first and last characters in the string S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H3:",
"sec_num": null
},
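The S-TRICK condition is a constant-time check per candidate pair; a minimal sketch (the function name is our own):

```python
def s_trick(a: str, b: str, d: str) -> bool:
    """H3 (S-TRICK): keep <A, B> only if A shares its first character
    with B or D and its last character with B or D."""
    return a[0] in (b[0], d[0]) and a[-1] in (b[-1], d[-1])
```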
{
"text": "The average numbers of source- and target-side analogical equations solved within 1 second with the help of H3 are around 791K and 10K respectively, and the average number of analogical equations which produce output is 8.75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H3:",
"sec_num": null
},
{
"text": "Our fourth heuristic reduces the effort of solving target-side analogical equations A\u2032 : B\u2032 :: C\u2032 : x, based on Langlais and Yvon's (2008) character-count property, called T-TRICK. Formally, it can be stated as: H4: Whenever a symbol occurs more frequently in A\u2032 than it does in B\u2032 and C\u2032 together, the analogical equation is bound to fail and need not be solved:",
"cite_spans": [
{
"start": 114,
"end": 140,
"text": "Langlais and Yvon's (2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "H3:",
"sec_num": null
},
{
"text": "| | | | | | { , , } c c c A B C x A B C c A B C \u03c6 \u2032 \u2032 \u2032 \u2032 \u2032 \u2032 \u2032 \u2032 \u2032 \u2260 \u2264 + \u2200 \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H3:",
"sec_num": null
},
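H4's character-count solvability test can be sketched with `collections.Counter` (the function name is our own):

```python
from collections import Counter

def t_trick_solvable(a: str, b: str, c: str) -> bool:
    """H4 (T-TRICK): the equation A' : B' :: C' : x cannot have a
    solution if some symbol occurs more often in A' than in B' and C'
    together, so such equations are skipped without being solved."""
    ca, cb, cc = Counter(a), Counter(b), Counter(c)
    return all(ca[ch] <= cb[ch] + cc[ch] for ch in ca)
```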
{
"text": "The average numbers of source-and target-side analogical equations solved within 1 second with the help of H4 are around 703K and 33K respectively and the average number of analogical equations which produce output is 0.382.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H3:",
"sec_num": null
},
{
"text": "Our final heuristic is a modification of H2. Here also, we speed up the search for relevant <A, B> pairs. We choose <A, B> pairs with a small edit distance with respect to the input sentence to be translated (D). This is done by sorting the examples by edit distance with respect to D and choosing the top two candidates from the sorted examples as the <A, B> pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H3:",
"sec_num": null
},
{
"text": "Consider as candidates pairs where A and B have a low edit distance w.r.t. D, such that A \u2260 B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H5:",
"sec_num": null
},
{
"text": "We are able to solve around 673K source-side and 10K target-side analogical equations with the help of H5 within 1 second. However, we found that with this heuristic, the average number of analogical equations that lead to output is 1,900 (compared with 176 for H2). Thus, this is expected to work best with our current experimental setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "H5:",
"sec_num": null
},
{
"text": "We have tested our EBMT system using PA on an NE transliteration task from English to Hindi. Five different experiments were conducted based on our EBMT system using PA; we shall call this system analogy-based EBMT (AEBMT). The five experiments correspond to the five different heuristics described in Section 3.1. Each of these five experiments was also run with time bounds of one second and three seconds to understand the effect of time when using the analogy-based system. In parallel, we have also used MaTrEx (Stroppa and Way, 2006), an open-source statistical MT (SMT) system, in order to estimate the relative performance of the models. Furthermore, the SMT system can be trained at character, syllable and word level using appropriate example-bases.",
"cite_spans": [
{
"start": 498,
"end": 521,
"text": "(Stroppa and Way, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We have found that there are cases where AEBMT correctly produces the transliteration but SMT fails, and vice versa. In order to further improve the transliteration accuracy, we use a combination of AEBMT with SMT. We combine these two systems in two ways. Assume that the transliterations of a word w produced by AEBMT and SMT are respectively T_AEBMT(w) and T_SMT(w). First, we back off to the SMT system when AEBMT fails to transliterate a name, mitigating the problem of AEBMT with SMT (AEBMT+SMT). To do this, we concatenate the outputs of both systems in the order T_AEBMT(w)+T_SMT(w), which automatically falls back to SMT when T_AEBMT(w)=null. On the other hand, taking SMT as the base system, we collect transliterations from the AEBMT system to deal with the problem of SMT using analogy (SMT+AEBMT), with the ordered concatenation T_SMT(w)+T_AEBMT(w). Thus we have four systems (AEBMT, SMT, AEBMT+SMT, SMT+AEBMT) that are tested with five heuristics (H1, H2, H3, H4 and H5) and with no heuristics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
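Both combinations reduce to an ordered concatenation of candidate lists with automatic back-off when the first system produces nothing; a minimal sketch (function names are our own):

```python
def combine(primary, backup):
    """Ordered combination of two systems' candidate lists: the primary
    system's outputs come first, so the back-off is taken automatically
    whenever the primary produces nothing (T_primary(w) = null)."""
    return (primary or []) + (backup or [])

def top1(primary, backup):
    """First-position output of the combined system, or None."""
    merged = combine(primary, backup)
    return merged[0] if merged else None
```

With `primary` as the AEBMT candidates and `backup` as the SMT candidates this models AEBMT+SMT; swapping the arguments models SMT+AEBMT.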
{
"text": "All the experiments are conducted with the NEWS 2009 English-Hindi transliteration data (Kumaran and Kellner, 2007). The data consist of 10,000 NEs for training and 1,000 names for testing. The same examples are represented in three different ways: word level {spingarn -सिपनगारन}, syllable level {spi nga rn -सिप न गा रन} and character level {s p i n g a r n -स प ि◌ न ग ◌ा र न}. All the experiments were tested with character-, syllable- and word-level NEs as example-base.",
"cite_spans": [
{
"start": 88,
"end": 115,
"text": "(Kumaran and Kellner, 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Used for the Experiment",
"sec_num": "4.1"
},
{
"text": "We have evaluated the system with the NEWS'09 metrics (Li et al., 2009). The accuracy is defined as the ratio of correct transliterations at the first position to the total number of words to be transliterated. Table 1 summarizes the final accuracy achieved by the different methods, varying the allowable running time to transliterate a single name. Note that the SMT baseline accuracies are 31.8%, 36.2% and 8.7% respectively for the character-, syllable- and word-level models. The highest accuracies achieved with EBMT using analogy are 28.9%, 30.3% and 29.3% respectively for the character-, syllable- and word-level models with the H5 heuristic. However, when combining SMT with AEBMT (AEBMT+SMT), the highest accuracies obtained are 36.0%, 37.1% and 29.3%, with relative improvements over the baseline (SMT) of 13.2%, 2.5% and 236.8% respectively for the character-, syllable- and word-level models. Hereafter all improvements denote relative improvements. On the other hand, the combination of AEBMT with SMT (SMT+AEBMT) shows accuracies of 31.8%, 37.7% and 29.6%, improvements of 0%, 4.1% and 240.2% respectively for the three models.",
"cite_spans": [
{
"start": 54,
"end": 71,
"text": "(Li et al., 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
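A sketch of the top-1 accuracy computation, assuming a single reference transliteration per name (the official NEWS'09 metric in Li et al. (2009) also credits alternative reference spellings; names below are our own):

```python
def accuracy(outputs, references):
    """Top-1 accuracy: the fraction of inputs whose first-ranked
    transliteration matches the reference.  `outputs` is a list of
    ranked candidate lists (possibly empty when a system produces
    no transliteration), `references` the gold transliterations."""
    correct = sum(1 for out, ref in zip(outputs, references)
                  if out and out[0] == ref)
    return correct / len(references)
```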
{
"text": "We found that AEBMT has lower accuracy on its own for the character- and syllable-level models on the transliteration task. However, the word-level AEBMT models show a huge improvement over the SMT-based models. This gain might seem insignificant when NE transliteration is treated as a task on its own, as the other models (character and syllable level) have higher accuracy. However, for full-text translation, SMT models are trained at the word/phrase level and so can only transliterate names that are seen in the corpus. A similar effect has been observed in our word-level NE transliteration experiment. By contrast, our AEBMT models inherently consider every word/sentence as a string of characters. Thus, a significant improvement has been obtained, which might make an analogy-based MT system relevant for addressing unknown words in a standard phrase-based SMT system. Another significant observation is that AEBMT accuracy increases when a longer running time is allowed for the transliteration, essentially because more analogical equations are solved, producing correct solutions for more NEs. This effect has been observed for all the heuristics applied in our system. Furthermore, we conducted experiments allowing a running time of 10 seconds, and found significant improvement with AEBMT for all heuristics other than H5, but no improvement for the combined systems (AEBMT+SMT, SMT+AEBMT). The H5 heuristic is possibly able to capture the solvable analogies within 3 seconds, so there is no improvement with a running time of 10 seconds. Figure 2(a) shows the improvement in accuracy over time with AEBMT while the H5 heuristic is in use. It is interesting to note that the use of heuristics improves the performance of the analogy-based MT, with the exception of the H1 and H4 heuristics. This is because some of the valid analogies are filtered out by the risky strategy of these heuristics, which discount some <A, B> pairs as in (6). 
A combination of SMT with AEBMT (AEBMT+SMT) gives improvements of 13.2%, 2.5% and 236.8% respectively for the character-, syllable- and word-level models compared to the standard SMT. Looking more closely, we see improvement with AEBMT+SMT in the character-based model with all the heuristics, compared to both AEBMT and SMT. The syllable-level model shows a huge improvement (a minimum of 51.9%) with AEBMT+SMT compared to AEBMT alone, but only in two cases (no heuristic and H5) did we find a small improvement (0.8% and 2.5%) over SMT. Figure 2(b) gives a comparison of the total number of NEs transliterated, the number of NEs correctly transliterated irrespective of their rank in the output list, and the number of NEs correctly transliterated at the first position. Although H2 is much better in all aspects than no heuristics, the percentage of names correctly transliterated at the top position by H2 (30%) is much lower than with no heuristics (42.5%). Thus, in the combined system (AEBMT+SMT), no heuristics has a small improvement compared to H2. However, H5 reduces the number of false positives and shows improvements for all possible combinations. Finally, when combining AEBMT with SMT (SMT+AEBMT), if no transliteration is produced by SMT, a back-off transliteration is taken from AEBMT. This gives improvements of 4.1% and 240.2% for the syllable- and word-level models respectively over the standard SMT model. However, this combination gives no improvement for the character-level model compared to the SMT model, irrespective of the heuristic in use. This can be explained by the fact that character-level SMT models observe all possible characters of the language when a reasonable amount of example data is used to train the system. Thus, the character-based system produces transliterations for all the NEs and no back-off is taken from the AEBMT system. 
Furthermore, in the syllable- and word-level SMT+AEBMT systems, effects of the heuristics similar to those for AEBMT+SMT have been noticed. Heuristic H2 is better than H3 or no heuristics within the AEBMT system but has no effect, or a negative one, in combination with SMT. The H5 heuristic yields improvements for the character-, syllable- and word-based models alike in the case of SMT+AEBMT.",
"cite_spans": [],
"ref_spans": [
{
"start": 1555,
"end": 1566,
"text": "Figure 2(a)",
"ref_id": "FIGREF2"
},
{
"start": 2487,
"end": 2498,
"text": "Figure 2(b)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Observations",
"sec_num": "4.3"
},
{
"text": "The most common type of error encountered by the AEBMT models is that the correct output is often produced but not always in the first position. As we have seen in Figure 2(b), with the heuristic H5 only 30.2% of NEs are correctly transliterated with the highest frequency in the output list, although a total of 42% of NEs are transliterated correctly irrespective of their position in the list. Without this effect, AEBMT on its own could have compared much better with the SMT models.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 176,
"text": "Figure 2(b)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Assessment of Error Types",
"sec_num": "4.4"
},
{
"text": "The second type of error is spelling variation in the reference data. For example, the English input NE 'edinburgh' can be written as '\u0910\u093f\u0921\u0928\u092c\u0917\u0930\u094d ' or '\u090f\u0921\u0940\u0928\u092c\u0917\u0930\u094d ' in Hindi. The matra '\u093f\u25cc' becomes '\u25cc\u0940'. With our system, we are able to produce '\u0910\u093f\u0921\u0928\u092c\u0917\u0930\u094d ' but the reference translation has '\u090f\u0921\u0940\u0928\u092c\u0917\u0930\u094d ', thus resulting in an incorrect transliteration. We have found 46 (4.6%) such cases where the output differs from the reference due to this spelling variation. Capturing these spelling variations could have increased the absolute accuracy by 4.6%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment of Error Types",
"sec_num": "4.4"
},
{
"text": "Finally, we have seen cases where there is a tie for the top frequency in the output list; we choose one output randomly in such cases. The effect is shown in Table 2 for the NEs 'pratima' and 'bhutti'. In the case of 'pratima', the correct output as per the reference data is 'प्रितमा', although all three outputs have the same frequency of 1. In the case of 'bhutti', there are two outputs which have the same frequency of 6, and 'भट्टी ु ' is the correct output. ",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Assessment of Error Types",
"sec_num": "4.4"
},
{
"text": "Unlike other approaches to EBMT, the PA-based approach seems to suffer badly when the size of the example-base is increased, with both processing times and the number of solutions increasing. It is clear that heuristics must be introduced to handle the underlying problems of analogy-based translation. In this paper, we have described the effect of heuristics on the automatic transliteration of NEs using analogy-based EBMT. We have found that the analogy-based approach has lower recall (it is unable to find any solution in many cases) compared to SMT. However, a combination of the two has achieved a reasonably good improvement over the individual models. The approach seems to have difficulty as a stand-alone translation model; its use for the special case of unknown words, particularly NEs, seems much more promising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Analogical translation of unknown words in a statistical machine translation framework",
"authors": [
{
"first": "E",
"middle": [],
"last": "Denoual",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the MT Summit XI",
"volume": "",
"issue": "",
"pages": "135--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denoual, E. 2007. Analogical translation of unknown words in a statistical machine translation framework. In Proceedings of the MT Summit XI, Copenhagen, Denmark, pp.135-141.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Name translation in statistical machine translation learning when to transliterate",
"authors": [
{
"first": "U",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "389--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hermjakob, U., Knight, K. and Daume III, H. 2008. Name translation in statistical machine translation learning when to transliterate. In Proceedings of the Annual Meeting of the Association of Computational Linguistics (ACL), Columbus, Ohio, pp. 389-397.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A generic framework for machine translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kumaran",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kellner",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 30 th Annual International SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "721--722",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumaran, A. and Kellner, T. 2007. A generic framework for machine translation. In Proceedings of the 30 th Annual International SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands, pp. 721-722.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Translating unknown words using analogical learning",
"authors": [
{
"first": "P",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Patry",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "877--886",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langlais, P. and Patry, A. 2007. Translating unknown words using analogical learning. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL-2007, Prague, Czech Republic, pp. 877-886.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Scaling up analogical learning",
"authors": [
{
"first": "P",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22 nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "51--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langlais, P. and Yvon, F. 2008. Scaling up analogical learning. In Proceedings of the 22 nd International Conference on Computational Linguistics (COLING, 2008), Manchester, UK, pp. 51-54.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improvements in analogical learning: application to translating multi-terms of the medical domain",
"authors": [
{
"first": "P",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yvon",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "487--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langlais, P., Yvon, F. and Zweigenbaum, P. 2009. Improvements in analogical learning: application to translating multi-terms of the medical domain. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL, 2009), Athens, Greece, pp. 487-495.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Solving analogies on words: an algorithm",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lepage",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36 th Annual Meeting of the Association of Computational Linguistics and 17 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "728--734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lepage, Y. 1998. Solving analogies on words: an algorithm. In Proceedings of the 36 th Annual Meeting of the Association of Computational Linguistics and 17 th International Conference on Computational Linguistics, Quebec, Canada, pp. 728-734.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Lower and higher estimates of the number of \"true analogies\" between sentences contained in a large multilingual corpus",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lepage",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "736--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lepage, Y. 2004. Lower and higher estimates of the number of \"true analogies\" between sentences contained in a large multilingual corpus. In Proceedings of the 20th International Conference on Computational Linguistics, Geneva, Switzerland, pp. 736-742.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The 'purest' EBMT system ever built: no variables, no templates, no training, examples, just examples, only examples",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lepage",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Denoual",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the MT Summit X, Second Workshop on Example-Based Machine Translation",
"volume": "",
"issue": "",
"pages": "81--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lepage, Y. and Denoual, E. 2005a. The 'purest' EBMT system ever built: no variables, no templates, no training, examples, just examples, only examples. In Proceedings of the MT Summit X, Second Workshop on Example-Based Machine Translation, Phuket, Thailand, pp. 81-90.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ALEPH: an EBMT system based on the preservation of proportional analogies between sentences across languages",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lepage",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Denoual",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Workshop on Spoken Language Translation: Evaluation Campaign on Spoken Language Translation (IWSLT 2005)",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lepage, Y. and Denoual, E. 2005b. ALEPH: an EBMT system based on the preservation of proportional analogies between sentences across languages. In Proceedings of the International Workshop on Spoken Language Translation: Evaluation Campaign on Spoken Language Translation (IWSLT 2005), Pittsburgh, Pennysylvania, pp. 47-54.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Purest ever example-based machine translation: Detailed presentation and assessment",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lepage",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Denoual",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine Translation",
"volume": "19",
"issue": "",
"pages": "251--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lepage, Y. and Denoual, E. 2005c. Purest ever example-based machine translation: Detailed presentation and assessment. Machine Translation 19:251-282.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The GREYC machine translation system for the evaluation campaign",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lepage",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lardilleux",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International Workshop on Spoken Language Translation (IWSLT 2007)",
"volume": "",
"issue": "",
"pages": "49--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lepage, Y. and Lardilleux, A. 2007. The GREYC machine translation system for the evaluation campaign. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT 2007), Trento, Italy, pp. 49-53.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Whitepaper of NEWS 2009 machine transliteration shared task",
"authors": [
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kumaran",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Pervouchine",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, H., Kumaran, A., Zhang, M. and Pervouchine, V. 2009. Whitepaper of NEWS 2009 machine transliteration shared task. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009), Suntec, Singapore, pp. 19-26.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "MaTrEx: the DCU machine translation system for IWSLT",
"authors": [
{
"first": "N",
"middle": [],
"last": "Stroppa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2006,
"venue": "Procedings of the International Workshop on Spoken Language Translation (IWSLT 2006)",
"volume": "",
"issue": "",
"pages": "31--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stroppa, N. and Way A. 2006. MaTrEx: the DCU machine translation system for IWSLT 2006. In Procedings of the International Workshop on Spoken Language Translation (IWSLT 2006), Kyoto, Japan, pp. 31-36.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "(3) A : B :: C(?) : It floated across the river Solving this results in C = It floated in the sea.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Analogy-based EBMT architecture",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "(6) a. He dived. [9 characters] b. He dived into the river. [24 characters] a) The effect of running time (1sec, 3sec, 10 sec) in analogy-based EBMT while H5 heuristic is in use with different models, b) Comparison of the total number of NEs transliterated, the total number of correct transliterations in the candidate output set and the correct number of transliterations at rank 1.",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">Character Level System Acc (%)</td><td colspan=\"3\">Syllable Level System Acc (%)</td><td colspan=\"3\">Word Level System Acc (%)</td></tr><tr><td colspan=\"3\">Heuristics</td><td/><td>SMT = 31.8</td><td/><td/><td>SMT = 36.2</td><td/><td/><td>SMT = 8.7</td><td/></tr><tr><td/><td/><td/><td>AEBMT</td><td>AEBMT +SMT</td><td>SMT +AEBMT</td><td>AEBMT</td><td>AEBMT +SMT</td><td>SMT +AEBMT</td><td>AEBMT</td><td>AEBMT +SMT</td><td>SMT +AEBMT</td></tr><tr><td>Running Time =</td><td>1s</td><td>No H1 H2 H3 H4 H5</td><td>13.7 9.4 22.2 14.1 9.4 28.1</td><td>32.6 32.3 32.5 32.4 32.2 36.0</td><td>31.8 31.8 31.8 31.8 31.8 31.8</td><td>14.2 13 21.4 15.4 13 30.2</td><td>36.5 35.8 32.6 36.2 35.8 37.1</td><td>36.6 36.6 36.4 36.7 36.6 37.7</td><td>15.7 11.2 20.6 15.3 11.2 28.7</td><td>15.7 14.1 20.6 15.3 14.1 28.7</td><td>17.2 15.3 20.9 15.6 15.3 29.0</td></tr><tr><td>Running Time</td><td>= 3s</td><td>No H1 H2 H3 H4</td><td>16.6 16.1 23.7 18.3 16</td><td>33.1 33 31.9 32.6 33</td><td>31.8 31.8 31.8 31.8 31.8</td><td>17.2 17.1 24.1 15.4 17.2</td><td>35.1 34.7 33.5 36.2 34.8</td><td>36.7 36.7 36.6 36.6 36.7</td><td>17.1 17 23.2 19.3 17.1</td><td>17.1 17 23.2 19.3 17.1</td><td>17.5 17.4 23.5 19.6 17.5</td></tr><tr><td/><td/><td>H5</td><td>28.9</td><td>35.7</td><td>31.8</td><td>30.3</td><td>37.0</td><td>37.6</td><td>29.3</td><td>29.3</td><td>29.6</td></tr></table>",
"type_str": "table",
"text": "transliteration accuracies (in %) with different models using different heuristics and with different allowable running time"
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table><tr><td>Input NE</td><td>Output Transliterations</td></tr><tr><td>Pratima</td><td>\u092a\u094d\u0930\u0924\u0940\u092e\u093e(1), \u092a\u094d\u0930\u093f\u0924\u092e\u093e(1), \u092a\u094d\u0930\u093f\u0924\u092e\u0948 (1)</td></tr><tr><td>bhutti</td><td>\u092d\u093f\u091f\u094d\u091f \u0941 (6), \u092d\u091f\u094d\u091f\u0940 \u0941 (6), \u092d \u25cc\u0941 \u091f\u0940(2), \u092d\u091f\u094d\u091f\u0908(2)</td></tr></table>",
"type_str": "table",
"text": "Example of transliteration with a tie in highest frequency output"
}
}
}
}