{
"paper_id": "C12-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:25:08.913681Z"
},
"title": "S-restricted monotone alignments: Algorithm, search space, and applications",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Goethe University",
"location": {
"addrLine": "Grueneburg-Platz 1",
"postCode": "60323",
"settlement": "Frankfurt, Frankfurt am Main",
"country": "Germany"
}
},
"email": "steffen.eger@yahoo.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a simple and straightforward alignment algorithm for monotone many-to-many alignments in grapheme-to-phoneme conversion and related fields such as morphology, and discuss a few noteworthy extensions. Moreover, we specify combinatorial formulas for monotone many-to-many alignments and decoding in G2P which indicate that exhaustive enumeration is generally possible, so that some limitations of our approach can easily be overcome. Finally, we present a decoding scheme, within the monotone many-to-many alignment paradigm, that relates the decoding problem to restricted integer compositions and that is, putatively, superior to alternatives suggested in the literature.",
"pdf_parse": {
"paper_id": "C12-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a simple and straightforward alignment algorithm for monotone many-to-many alignments in grapheme-to-phoneme conversion and related fields such as morphology, and discuss a few noteworthy extensions. Moreover, we specify combinatorial formulas for monotone many-to-many alignments and decoding in G2P which indicate that exhaustive enumeration is generally possible, so that some limitations of our approach can easily be overcome. Finally, we present a decoding scheme, within the monotone many-to-many alignment paradigm, that relates the decoding problem to restricted integer compositions and that is, putatively, superior to alternatives suggested in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Grapheme-to-phoneme conversion (G2P) is the problem of transducing, or converting, a grapheme, or letter, string x over an alphabet \u03a3 x into a phoneme string y over an alphabet \u03a3 y . A crucial first step thereby is finding alignments between grapheme and phoneme strings in training data. The classic alignment paradigm has assumed alignments that were (i) one-to-one or one-to-zero; i.e. one grapheme character is mapped to at most one phoneme character; this assumption has probably been a relic of both the traditional assumptions in machine translation (cf. (Brown et al., 1990) ) and in biological sequence alignment (cf. (Needleman and Wunsch, 1970) ). In the field of G2P such alignment models are also called -scattering models (cf. (Black et al., 1998) ).",
"cite_spans": [
{
"start": 562,
"end": 582,
"text": "(Brown et al., 1990)",
"ref_id": "BIBREF6"
},
{
"start": 627,
"end": 655,
"text": "(Needleman and Wunsch, 1970)",
"ref_id": "BIBREF24"
},
{
"start": 741,
"end": 761,
"text": "(Black et al., 1998)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(ii) monotone, i.e., the order between characters in grapheme and phoneme strings is preserved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is clear that, despite its benefits, the classical alignment paradigm has a couple of limitations; in particular, it may be unable to explain certain grapheme-phoneme sequence pairs, a.o. those where the length of the phoneme string is greater than the length of the grapheme string such as in exact igzaekt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where x has length 5 and y has length 6. In the same context, even if an input pair can be explained, the one-to-one or one-to-zero assumption may lead to alignments that, linguistically, seem nonsensical, such as p h o e n i x fi: n i k s where the reader may verify that, no matter where the is inserted, some associations will always appear unmotivated. Moreover, monotonicity appears in some cases violated as well, such as in the following, centre sent@r where it seems, linguistically, that the letter character r corresponds to phonemic r and graphemic word final e corresponds to @.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Fortunately, better alignment models have been suggested to overcome these problems. For example, (Jiampojamarn et al., 2007) and (Jiampojamarn and Kondrak, 2010) suggest 'manyto-many' alignment models that address issue (i) above. Similar ideas were already present in (Baldwin and Tanaka, 1999) , (Galescu and Allen, 2001) and (Taylor, 2005) . (Bisani and Ney, 2008) likewise propose many-to-many alignment models; more precisely, their idea is to segment grapheme-phoneme pairs into non-overlapping parts ('co-segmentation'), calling each segment a graphone, as in ph oe n i x f i: n i ks which consists of five graphones.",
"cite_spans": [
{
"start": 98,
"end": 125,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 130,
"end": 162,
"text": "(Jiampojamarn and Kondrak, 2010)",
"ref_id": "BIBREF18"
},
{
"start": 270,
"end": 296,
"text": "(Baldwin and Tanaka, 1999)",
"ref_id": "BIBREF2"
},
{
"start": 299,
"end": 324,
"text": "(Galescu and Allen, 2001)",
"ref_id": "BIBREF14"
},
{
"start": 329,
"end": 343,
"text": "(Taylor, 2005)",
"ref_id": "BIBREF29"
},
{
"start": 346,
"end": 368,
"text": "(Bisani and Ney, 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The purpose of the present paper is to introduce a simple, flexible and general monotone many-to-many alignment algorithm (in Section 3) that competes with the approach suggested in (Jiampojamarn et al., 2007) . 1 Thereby, our algorithm is an intuitive and straightforward generalization of the classical Needleman-Wunsch algorithm for (biological or linguistic) sequence alignment. Moreover, we explore valuable extensions of the presented framework, likewise in Section 3, which may be useful e.g. to detect latent classes in alignments, similar to what has been done in e.g. (Dreyer et al., 2008) . We also mention limitations of our procedure, in Section 4, and discuss the naive brute-force approach, exhaustive enumeration, as an alternative; furthermore, by specifying the search space for monotone many-to-many alignments, we indicate that exhaustive enumeration appears generally a feasible option in G2P and related fields. Next, in Section 6.1 we briefly mention how we perform training for string transductions in the monotone many-to-many alignment case. Then, a second contribution of this work is to suggest an alternative decoding procedure when transducing strings x into strings y, within the monotone many-to-many alignment paradigm (in Section 6.2). We thereby relate the decoding problem to restricted integer compositions, a field in mathematical combinatorics that has received increased attention in the last few years (cf. (Heubach and Mansour, 2004) , (Malandro, 2012) , (Eger, 2012a) ). Finally, we demonstrate the superiority of our approach by applying it to several data sets in Section 7.",
"cite_spans": [
{
"start": 182,
"end": 209,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 578,
"end": 599,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 1448,
"end": 1475,
"text": "(Heubach and Mansour, 2004)",
"ref_id": "BIBREF15"
},
{
"start": 1478,
"end": 1494,
"text": "(Malandro, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 1497,
"end": 1510,
"text": "(Eger, 2012a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It must be mentioned, generally, that we take G2P only as an (important) sample application of monotone many-to-many alignments, but that they clearly apply to other fields of natural language processing as well, such as transliteration, morphology/lemmatization, etc. and we thus also incorporate experiments on morphology data. Moreover, as indicated, we do not question the premise of monotonicity in the current work, but take it as a crucial assumption of our approach, leading to efficient algorithms. Still, 'local non-monotonicities' as exemplified above can certainly be adequately addressed within our framework, as should become clear from our illustrations below (e.g. with higher-order 'steps'). Figure 1 : Monotone paths in two-dimensional lattices corresponding to the monotone alignments between x = phoenix and y = fi:niks given in Section 1. In the left lattice, we have arbitrarily (but suggestively) colored each step in either red or blue.",
"cite_spans": [],
"ref_spans": [
{
"start": 709,
"end": 717,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Denote by the set of integers, by the set of non-negative integers, and by the set of real numbers. Consider the two-dimensional lattice 2 . In 2 , we call an ordered list of pairs (\u03b1 0 , \u03b2 0 ) = (0, 0), . . . , (\u03b1 k , \u03b2 k ) = (m, n) a path from (0, 0) to (m, n), and we call",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S-restricted monotone paths and alignments",
"sec_num": "2"
},
{
"text": "(a i , b i ) := (\u03b1 i , \u03b2 i ) \u2212 (\u03b1 i\u22121 , \u03b2 i\u22121 ), i = 1, . . . , k, steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S-restricted monotone paths and alignments",
"sec_num": "2"
},
{
"text": "Moreover, we call a path \u03bb in the lattice 2 from (0, 0) to (m, n) monotone if all steps (a, b) are non-negative, i.e. a \u2265 0, b \u2265 0, and we call the monotone path \u03bb S-restricted for a subset S of 2 if all steps lie within S, i.e. (a, b) \u2208 S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S-restricted monotone paths and alignments",
"sec_num": "2"
},
{
"text": "Note that S-restricted monotone paths define S-restricted monotone alignments, between strings x and y. For example, the two paths in Figure 1 correspond to the two monotone alignments between x = phoenix and y = fi:niks illustrated above. Thus, we identify S-restricted monotone paths with S-restricted monotone alignments in the sequel. Moreover, note that the set and number of S-restricted monotone paths allow simple recursions. To illustrate, the number T S (m, n) of S-restricted monotone paths from (0, 0) to (m, n) satisifies",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "S-restricted monotone paths and alignments",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T S (m, n) = (a,b)\u2208S T S (m \u2212 a, n \u2212 b),",
"eq_num": "(1)"
}
],
"section": "S-restricted monotone paths and alignments",
"sec_num": "2"
},
{
"text": "with initial condition T S (0, 0) = 1 and T S (m, n) = 0 if m < 0 or n < 0. As will be seen in the next section, under certain assumptions, optimal monotone alignments (or, equivalently, paths) can be found via a very similar recursion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S-restricted monotone paths and alignments",
"sec_num": "2"
},
{
"text": "Let two strings x \u2208 \u03a3 * x and y \u2208 \u03a3 * y be given. Moreover, assume that a set S of allowable steps is specified together with a real-valued similarity function sim : \u03a3 *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "x \u00d7 \u03a3 * y \u2192 between characters of \u03a3 x and \u03a3 y . Finally, assume that the score or value of an S-restricted monotone path \u03bb = (\u03b1 0 , \u03b2 0 ), . . . , (\u03b1 k , \u03b2 k ) is defined additively linear in the similarity of the substrings of x and y corresponding to the steps (a, b) taken, i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(\u03bb) = k i=1 sim(x \u03b1i \u03b1i\u22121+1 , y \u03b2i \u03b2i\u22121+1 ),",
"eq_num": "(2)"
}
],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "where by x \u03b1i \u03b1i\u22121+1 we denote the subsequence x \u03b1i\u22121+1 . . . x \u03b1i of x and analogously for y. Then it is not difficult to see that the problem of finding the path (alignment) with maximal score can be solved efficiently using a very similar (dynamic programming) recursion as in Equation (1), which we outline in Algorithm 1. Moreover, this algorithm is obviously a straightforward generalization of the classical Needleman-Wunsch algorithm, which specifies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "S as {(0, 1), (1, 0), (1, 1)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "Note, too, that in Algorithm 1 we include two additional quantities, not present in the original sequence alignment approach, namely, firstly, the 'quality' q of a step (a, b), weighted by a factor \u03b3 \u2208 . This quantity may be of practical importance in many situations. For example, if we specify sim as log-probability (see below), then Algorithm 1 has a 'built-in' tendency to substitute 'smaller', individually more likely steps (a, b) by larger, less likely steps because in the latter case fewer negative numbers are added; if sim assigns strictly positive values, this relationship is reversed. We can counteract these biases by factoring in the per se quality of a given step. Also note that if q is added linearly, as we have specified, then the dynamic programming recursion is not violated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "Secondly, we specify a function L : \u03a3 *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "x \u00d7 \u03a3 * y \u00d7 colors \u2192 , where colors is a finite set of 'colors', that encodes the following idea. Assume that each step (a, b) \u2208 S appears in C, C \u2208 , different 'colors', or states. Then, when taking step (a, b) with color c \u2208 colors (which we denote by the symbol (a, b) c in Algorithm 1), we assess the 'goodness' of this decision by the 'likelihood' L that the current subsequences of x and y selected by the step (a, b) 'belong to'/'are of' color (or state) c. As will be seen below, this allows to very conveniently identify (or postulate) 'latent classes' for character subsequences, while increasing the algorithm's running time only by a constant factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "To summarize our generalizations over the traditional sequence alignment approach, (i) we allow arbitrary non-negative steps S corresponding to S-restricted monotone alignments, (ii) we include a goodness measure q that evaluates the 'quality' of a given step (a, b) \u2208 S taken, and (iii) we color each step in C different colors and assess the goodness of color c for the subsequences of x and y selected by the current step (a, b) as the 'likelihood' L that these subsequences are of color c. Finally, we define the score of a monotone path as an additive linear combination of all three components discussed so that an efficient dynamic programming recursion applies. Note that the algorithm's running time is O(C|S|mn) and is thus linear in the number of colors, the size of S, and the string lengths m and n. 2 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "Algorithm 1 Generalized Needleman-Wunsch (GNW) 1: procedure GNW(x1 . . . xm, y1 . . . yn; S, sim, q, L) 2: Mij \u2190 \u2212\u221e for all (i, j) \u2208 2 such that i < 0 or j < 0 3: M00 \u2190 0 4: for i = 0 . . . m do 5: for j = 0 . . . n do 6: if (i, j) = (0, 0) then 7: Mij \u2190 max (a,b) c \u2208S {Mi\u2212a,j\u2212b + sim(x i i\u2212a+1 , y j j\u2212b+1 ) + \u03b3q(a, b) + \u03c7L (x i i\u2212a+1 , y j j\u2212b+1 ), c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "procedure EM({(xi, yi) | i = 1, . . . , N }; S, T ,\u015d im0,q0,L0) 2: t \u2190 0 3: while t < T do 4: for i = 1 . . . N do 5: (x a i , y a i ) \u2190 GNW(xi, yi; S,\u015d imt,qt,Lt) (x a i , y a i ) denotes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "end for 7:\u015d imt+1,qt+1,Lt+1 \u2190 f ({x a i , y a i | i = 1, . . . , N })",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "The function f extracts (count) updates from the aligned data 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "t \u2190 t + 1 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "end while 10: end procedure As to the similarity measure sim employed in Algorithm 1, a popular choice is to specify it as the (logarithm of the) joint probability of the pair (u, v) \u2208 \u03a3 *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "x \u00d7 \u03a3 * y , but a multitude of alternatives is conceivable here such as the \u03c7 2 similarity, pointwise mutual information, etc. (see for instance the overview in (Hoang et al., 2009) ). Also note that sim(u, v) is usually initially unknown but can be iteratively estimated via application of Algorithm 1 and count estimates in an EM-like fashion (cf. (Dempster et al., 1977) ), see Algorithm 2. 3 As concerns q and L, we can likewise estimate them iteratively from data, specifying their abstract forms via any well-defined (goodness) measures. The associated coefficients \u03b3 and \u03c7 can be optimized on a development set or set exogenously.",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "(Hoang et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 350,
"end": 373,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF10"
},
{
"start": 394,
"end": 395,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An algorithm for S-restricted monotone alignments",
"sec_num": "3"
},
{
"text": "In the last section, we have specified a polynomial time algorithm for solving the monotonic S-restricted string alignment problem, under the following restriction; namely, we defined the score of an alignment additively linear in the similarities of the involved subsequences. This, however, entails an independence assumption between successive aligned substrings that oftentimes does not seem justified in linguistic applications. If, on the contrary, we specified the score, score(\u03bb), of an alignment \u03bb between strings x and y as e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exhaustive enumeration and alignments",
"sec_num": "4"
},
{
"text": "score(\u03bb) = k i=1 log Pr (x \u03b1i \u03b1i\u22121+1 , y \u03b2i \u03b2i\u22121+1 ) | (x \u03b1i\u22121 \u03b1i\u22122+1 , y \u03b2i\u22121 \u03b2i\u22122+1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exhaustive enumeration and alignments",
"sec_num": "4"
},
{
"text": "(using joint probability as similarity measure) -this would correspond to a 'bigram scoring model' -then Algorithm 1 would not apply.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exhaustive enumeration and alignments",
"sec_num": "4"
},
{
"text": "To address this issue, we suggest exhaustive enumeration as a possibly noteworthy alternative -enumerate all S-restricted monotone alignments between strings x and y, score each of them individually, taking the one with maximal score. This brute-force approach is, despite its simplicity, the most general approach conceivable and works under all specifications of scoring functions. Its practical applicability relies on the sizes of the search spaces for S-restricted monotone alignments and on the lengths of the strings x and y involved. We note the following here. By Equation (1), for the choice S = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 1)}, a seemingly reasonable specification in the context of G2P (see next section), the number T S (n, n) of S-restricted monotone alignments is given as (for explicit formulae for specific S, cf. (Eger, 2012b)) 1, 1, 3, 7, 16, 39, 95, 233, 572, 1406, 3479, 8647 for n = 1, 2, . . . , 12 and e.g. T S (15, 15) = 134, 913. Moreover, for the distribution of letter string and phoneme string lengths we estimate Poisson distributions (cf. (Wimmer et al., 1994) ) with parameters \u00b5 \u2208 as listed in Table 4 for the German Celex (Baayen et al., 1996) , French Brulex (Content et al., 1990) and English Celex datasets, as used in Section 7. As the table and the above numbers show, there are on average only a few hundred or few thousand possible monotone many-to-many alignments between grapheme and phoneme string pairs, for which exhaustive enumeration appears, thus, quite feasible; moreover, given enough data, it usually does not harm much to exclude a few string pairs, for which alignment numbers are too large. ",
"cite_spans": [
{
"start": 858,
"end": 860,
"text": "1,",
"ref_id": null
},
{
"start": 861,
"end": 863,
"text": "3,",
"ref_id": null
},
{
"start": 864,
"end": 866,
"text": "7,",
"ref_id": null
},
{
"start": 867,
"end": 870,
"text": "16,",
"ref_id": null
},
{
"start": 871,
"end": 874,
"text": "39,",
"ref_id": null
},
{
"start": 875,
"end": 878,
"text": "95,",
"ref_id": null
},
{
"start": 879,
"end": 883,
"text": "233,",
"ref_id": null
},
{
"start": 884,
"end": 888,
"text": "572,",
"ref_id": null
},
{
"start": 889,
"end": 894,
"text": "1406,",
"ref_id": null
},
{
"start": 895,
"end": 900,
"text": "3479,",
"ref_id": null
},
{
"start": 901,
"end": 905,
"text": "8647",
"ref_id": null
},
{
"start": 1079,
"end": 1100,
"text": "(Wimmer et al., 1994)",
"ref_id": "BIBREF32"
},
{
"start": 1165,
"end": 1186,
"text": "(Baayen et al., 1996)",
"ref_id": "BIBREF1"
},
{
"start": 1203,
"end": 1225,
"text": "(Content et al., 1990)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 1136,
"end": 1143,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Exhaustive enumeration and alignments",
"sec_num": "4"
},
{
"text": "Choice of the set of steps S is a question of model selection, cf. (Zucchini, 2000) . Several approaches are conceivable here. First, for a given domain of application one might specify a possibly 'large' set of steps \u2126 capturing a preferably comprehensive class of alignment phenomena in the domain. This may not be the best option because it may provide Algorithm 1 with too many 'degrees of freedom', allowing it to settle in unfavorable local optima, and thus may lead to suboptimal alignments (we find appropriate step restriction to have dramatic effects on alignment quality, which we investigate more thoroughly in subsequent research). A better, but potentially very costly, alternative is to exhaustively enumerate all possible subsets S of \u2126, apply Algorithm 1 and/or Algorithm 2, and evaluate the quality of the resulting alignments with any choice of suitable measures such as alignment entropy (cf. (Pervouchine et al., 2009) ), average log-likelihood, Akaike's information criterion (Akaike, 1974) or the like. Another possibility would be to use a comprehensive \u2126, but to penalize unlikely steps, which could be achieved by setting \u03b3 in Algorithm 1 to a 'large' real number and then, in subsequent runs, employ the remaining steps S \u2286 \u2126; we outline this approach in Section 7.",
"cite_spans": [
{
"start": 67,
"end": 83,
"text": "(Zucchini, 2000)",
"ref_id": "BIBREF33"
},
{
"start": 913,
"end": 939,
"text": "(Pervouchine et al., 2009)",
"ref_id": "BIBREF25"
},
{
"start": 998,
"end": 1012,
"text": "(Akaike, 1974)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of S",
"sec_num": "5"
},
{
"text": "Sometimes, specific knowledge about a particular domain of application may be helpful, too. For example, in the field of G2P, we would expect most associations in alignments to be of the type M -to-1, i.e. one or several graphemes encode a single phoneme. This is because it seems reasonable to assume that the number of phonetic units used in language communities typically exceeds the number of units in alphabetic writing systems -26 in the case of the Latin alphabet -so that one or several letters must be employed to represent a single phoneme. There may be 1-to-N or even M -to-N relationships but we would consider these exceptions. In the current work, we choose S = {(1, 1), (2, 1), (3, 1), (4, 1), (1, 2)} for G2P data sets, and for the morphology data sets we either adopt from (Eger, 2012b) or use a comprehensive \u2126 with 'largest' step (2, 2).",
"cite_spans": [
{
"start": 790,
"end": 803,
"text": "(Eger, 2012b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Choice of S",
"sec_num": "5"
},
{
"text": "Decoding is the process of generating\u0177 \u2208 \u03a3 * y given x \u2208 \u03a3 *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "6"
},
{
"text": "x . Below, we explain how we perform this process, within the S-restricted monotone many-to-many alignment framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "6"
},
{
"text": "We first generate monotone many-to-many alignments between string pairs with one of the procedures outlined in Sections 3 and 4. Then, we train a linear chain conditional random field (CRF; see (Lafferty et al., 2001) ) as a graphical model for string transduction on the aligned data. The choice of CRFs is arbitrary; any transduction procedure tr would do, but we decide for CRFs because they generally have good generalization properties. In all cases, we use window sizes of three or four to predict y string elements from x string elements.",
"cite_spans": [
{
"start": 194,
"end": 217,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training a string transduction model",
"sec_num": "6.1"
},
{
"text": "Our overall decoding procedure is as follows. Given an input string x, we exhaustively generate all possible segmentations of x, feed the segmented strings to the CRF for transduction and evaluate each individual resulting sequence of 'graphones' with an n-gram model learned on the aligned data, taking the y string corresponding to the graphone sequence with maximal probability as the most likely transduced string for x. We illustrate in Algorithm 3. As to the size of the search space that this procedure entails, any segmentation It is a simple exercise to show that there are m\u22121 k\u22121 integer compositions of m with k parts, where by m k we denote the respective binomial coefficient. Furthermore, if we put restrictions on the maximal size of parts -e.g. in G2P a reasonable upper bound l on the size of parts would probably be 4 -we have that there are k m\u2212k l integer compositions of m with k parts, each between \u03b1 = 1 and \u03b2 = l, where by k m l+1 we denote the respective polynomial coefficient (Comtet, 1974) . To avoid having to enumerate segmentations for all possible numbers k of segment parts of a given input string x of length m -these would range between 1 and m, entailing m k=1 m\u22121 k\u22121 = 2 m\u22121 possible segmentations in total in the case without upper bound 4 -we additionally train a 'number of parts' prediction model with which to estimate k ask; we call this in short predictor model. To illustrate the number of possible segmentations with a concrete example, if x has length m = 15, a rather large string size given the values in Table 4, there are 2472 , 2598, 1902, 990, 364, 91, 14, 1 possible segmentations of x with k = 8, 9, 10, 11, 12, 13, 14, 15 parts, each between 1 and 4.",
"cite_spans": [
{
"start": 1004,
"end": 1018,
"text": "(Comtet, 1974)",
"ref_id": "BIBREF8"
},
{
"start": 1582,
"end": 1615,
"text": ", 2598, 1902, 990, 364, 91, 14, 1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1556,
"end": 1581,
"text": "Table 4, there are 2472",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "6.2"
},
{
"text": "For the sake of completeness, we note that our above discussion presumed that there are no 'empty' parts in integer compositions, that is, that all parts in the integer composition 2PKE. abbrechet, entgegentretet, zuziehet z. abzubrechen, entgegenzutreten, zuzuziehen rP. redet, reibt, treibt, verbindet pA. geredet, gerieben, getrieben, verbunden Table 2 : String pairs in morphology data sets 2PKE and rP (omitting 2PIE and 13SIA for space reasons) discussed by (Dreyer et al., 2008) . Changes from one form to the other are in bold (information not given in training). Adapted from (Dreyer et al., 2008) .",
"cite_spans": [
{
"start": 464,
"end": 485,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 585,
"end": 606,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "6.2"
},
{
"text": "are integers between 1 and the upper bound l. When converting graphemes to phonemes, we find it unlikely that a sound would be uttered without there being a corresponding letter that gives rise to this sound, 5 i.e. our assumption seems justified. In the general monotone alignment case, however, the zero case would have to be included, e.g. when converting phonemes to graphemes, or in the morphology data sets discussed below, where e.g. segmentations as in \u2205 m a c h t 5 = 0 + 1 + 1 + 1 + 1 + 1 seem justified to convert German third person verb form macht into participle form gemacht. Analogously as above, we find that there are k m l+1 integer compositions of m with k parts, each between 0 and l. To illustrate again, when m = 15, there are 37080, 142749, 831204, 2268332, . . . possible segmentations of x with k = 8, 9, 10, 11, . . . parts, each between 0 and 4. Obviously, these numbers are much larger than those where all parts are \u2265 1, which is problematic not only from the point of view of computing resources but may also affect accuracy results because more alternatives are provided from which to select. Luckily, as illustrated below, it should usually be possible to specify modeling choices where zero parts do not occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "6.2"
},
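The composition counts quoted above can be checked mechanically. The following is a minimal dynamic-programming sketch (our own illustration, not the paper's implementation; all names are ours) that counts integer compositions of m into exactly k ordered parts, each part lying between given bounds:

```python
def num_compositions(m, k, lo, hi):
    """Number of integer compositions of m into exactly k ordered parts,
    each part between lo and hi (inclusive), by dynamic programming."""
    # table[t] = number of ways to reach sum t with the parts placed so far
    table = [0] * (m + 1)
    table[0] = 1
    for _ in range(k):
        new = [0] * (m + 1)
        for total in range(m + 1):
            if table[total]:
                for part in range(lo, min(hi, m - total) + 1):
                    new[total + part] += table[total]
        table = new
    return table[m]
```

With bounds 0 and l = 4, `num_compositions(15, 8, 0, 4)` reproduces the 37080 segmentations quoted above for k = 8.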
{
"text": "We conduct our experiments on three G2P data sets, the German Celex (G-Celex) and French Brulex (F-Brulex) data sets taken from the Pascal challenge (van den Bosch et al., 2006) , and the English Celex data set (E-Celex); and on the four German morphology data sets discussed in (Dreyer et al., 2008 ), which we refer to, in accordance with the named authors, as rP, 2PKE, 13SIA and 2PIE, respectively. Both for the G2P and the morphology data, we hold monotonicity to be, by and large, a legitimate assumption, so that our approach would appear justified. As to the morphology data sets, we illustrate in Table 7 a few string pair relationships that they contain, as indicated by (Dreyer et al., 2008) .",
"cite_spans": [
{
"start": 148,
"end": 176,
"text": "(van den Bosch et al., 2006)",
"ref_id": "BIBREF31"
},
{
"start": 277,
"end": 297,
"text": "(Dreyer et al., 2008",
"ref_id": "BIBREF11"
},
{
"start": 672,
"end": 693,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 597,
"end": 604,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "We generate alignments for our data sets using Algorithms 1 and 2 and, as a comparison, we implement an exhaustive search bigram scoring model as indicated in Section 4 in an EM-like fashion similar to Algorithm 2, employing the CMU SLM toolkit (Clarkson and Rosenfeld, 1997) with Witten-Bell smoothing as n-gram model. For Algorithm 1, which we also refer to as the unigram model in the following, we choose steps S as shown in Table 3: E-Celex: {(1, 1), (2, 1), (3, 1), (4, 1), (1, 2)}; rP: {(0, 2), (1, 1), (1, 2), (2, 1), (2, 2)}; 2PKE: {(0, 2), (1, 1), (2, 1), (2, 2)}; 13SIA:",
"cite_spans": [
{
"start": 248,
"end": 278,
"text": "(Clarkson and Rosenfeld, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 428,
"end": 436,
"text": "Table E",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alignments",
"sec_num": "7.1"
},
{
"text": "{(1, 1), (1, 2), (2, 1), (2, 2)}; 2PIE:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignments",
"sec_num": "7.1"
},
{
"text": "{(1, 1), (1, 2)} Table 3 : Data set and choice of S. For all three G2P data sets, we select the same S, shown exemplarily for E-Celex. The choice of S for rP and 2PKE is taken from (Eger, 2012b) . For 13SIA and 2PIE we use comprehensive \u2126's with largest step (2, 2), but the algorithm ends up using just the outlined set of steps. 3 As similarity measure sim, we use log prob with Good-Turing smoothing, and for q we likewise use log prob; we outline the choice of L below. Initially, we set \u03b3 and \u03c7 to zero.",
"cite_spans": [
{
"start": 181,
"end": 194,
"text": "(Eger, 2012b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 17,
"end": 24,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alignments",
"sec_num": "7.1"
},
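To make concrete the kind of dynamic program that underlies a monotone many-to-many aligner with an arbitrary step set S and an arbitrary similarity measure sim, here is a simplified sketch; this is our own reconstruction under stated assumptions, not the paper's code, and the function names are invented:

```python
def align(x, y, S, sim):
    """Viterbi-style DP for monotone many-to-many alignment.
    best[i][j] = best score of aligning x[:i] with y[:j]; each step
    (a, b) in S consumes a characters of x and b characters of y."""
    NEG = float("-inf")
    n, m = len(x), len(y)
    best = [[NEG] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if best[i][j] == NEG:
                continue
            for (a, b) in S:
                if i + a <= n and j + b <= m:
                    s = best[i][j] + sim(x[i:i + a], y[j:j + b])
                    if s > best[i + a][j + b]:
                        best[i + a][j + b] = s
                        back[i + a][j + b] = (i, j)
    # recover the aligned segment pairs by backtracking
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        pairs.append((x[pi:i], y[pj:j]))
        i, j = pi, pj
    return best[n][m], pairs[::-1]
```

For example, with S = {(1, 1), (2, 1)} and a sim that rewards the pair ("ph", "f"), the aligner picks the (2, 1) step and aligns ph with f in one chunk.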
{
"text": "As an alignment quality measure we consider conditional entropy H(L | P ) (or H(P | L)) as suggested by (Pervouchine et al., 2009) . Conditional entropy measures the average uncertainty of a (grapheme) substring L given a (phoneme) substring P ; intuitively, the smaller H(L | P ), the better the alignment, as it produces more consistent associations.",
"cite_spans": [
{
"start": 104,
"end": 130,
"text": "(Pervouchine et al., 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignments",
"sec_num": "7.1"
},
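For concreteness, the conditional entropy H(L | P) over aligned substring pairs can be computed roughly as follows; this is a plain-Python sketch under our reading of the measure, not the code of (Pervouchine et al., 2009):

```python
from collections import Counter
from math import log2

def conditional_entropy(pairs):
    """H(L | P) over aligned (grapheme substring L, phoneme substring P)
    pairs: H(L | P) = -sum_{l,p} p(l, p) * log2 p(l | p)."""
    joint = Counter(pairs)             # counts of (L, P) pairs
    marg_p = Counter(p for _, p in pairs)  # counts of P alone
    n = len(pairs)
    h = 0.0
    for (l, p), c in joint.items():
        p_joint = c / n                # empirical p(l, p)
        p_cond = c / marg_p[p]         # empirical p(l | p)
        h -= p_joint * log2(p_cond)
    return h
```

A perfectly consistent alignment (each P always mapped from the same L) yields H(L | P) = 0, matching the intuition that smaller is better.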
{
"text": "In the following, all results are averages over several runs, 5 in the case of the unigram model and 2 in the case of the bigram model. Both for the bigram model and the unigram model, we randomly select K training samples, K \u2208 {50, 100, 300, 500}, in each EM iteration, from which to compute alignments and update probability estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignments",
"sec_num": "7.1"
},
{
"text": "In Figure 2 , we show learning curves over EM iterations in the case of the unigram and bigram models, and over training set sizes. We see that performance, as measured by conditional entropy, increases over iterations both for the bigram model and the unigram model (in Figure 2 ), but apparently alignment quality decreases again when the training set size K is too large in the case of the bigram model (omitted for space reasons); similar outcomes have been observed when similarity measures other than log prob are employed in Algorithm 1 for the unigram model, e.g. the \u03c7 2 similarity measure (cf. (Eger, 2012b) ). To explain this, we hypothesize that the bigram model (and likewise the unigram model under specific similarity measures) is more susceptible to overfitting when trained on larger training sets, so that it is more reluctant to escape 'non-optimal' local minima. We also see that, apparently, the unigram model frequently performs better than the bigram model.",
"cite_spans": [
{
"start": 613,
"end": 626,
"text": "(Eger, 2012b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 271,
"end": 279,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Alignments",
"sec_num": "7.1"
},
{
"text": "The latter results may be partly misleading, however. Conditional entropy, the way (Pervouchine et al., 2009) have specified it, is a 'unigram' assessment model itself and may therefore be incapable of accounting for certain contextual phenomena. For example, in the 2PKE and rP data, we find alignment possibilities of the following two types: - | g | e | b | t aligned with ge | g | e | b | en, versus g | e | - | b | t aligned with g | e | ge | b | en, where the first is the linguistically 'correct' alignment, due to the prefixal character of ge in German, and the second is the 'incorrect' alignment. By its specification, Algorithm 1 must assign both these alignments the same score and hence cannot distinguish between them; the same holds true for the conditional entropy measure. To address this issue, we evaluate alignments by a second method as follows. From the aligned data, we extract a random sample of size 1000 and train an n-gram graphone model (which can account for 'positional associations') on the residual, assessing its perplexity on the held-out set of size 1000. Results are shown in Table 4 . We see that, in agreement with our visual impression at least for the morphology data, the alignments produced by the bigram model seem to be slightly more consistent in that they reduce the perplexity of the n-gram graphone model, whereas conditional entropy proclaims the opposite ranking.",
"cite_spans": [
{
"start": 83,
"end": 109,
"text": "(Pervouchine et al., 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 1045,
"end": 1052,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Alignments",
"sec_num": "7.1"
},
{
"text": "In Table 5 we report results when experimenting with the coefficient \u03b3 of the quality of steps measure q. Overall, we do not find that increasing \u03b3 generally leads to a performance increase, as measured by e.g. H(L | P ). On the contrary, when choosing a comprehensive \u2126 as set of steps, as in Table 5 , where we choose \u2126 = {(a, b) | a \u2264 4, b \u2264 4}\\{(0, 0)}, for \u03b3 = 0 we find values of 0.278, 0.546, 0.662 for H(L | P ) for G-Celex, F-Brulex and E-Celex, respectively, while the corresponding values for \u03b3 = 10 are 0.351, 0.833, 1.401. Conversely, H(P | L), the putatively more indicative measure for transduction from x to y, takes values 0.499, 0.417, 0.598 for \u03b3 = 0 and 0.378, 0.401, 1.113 for \u03b3 = 10, so that, except for the E-Celex data, \u03b3 = 10 apparently leads to improved H(P | L) values in this situation, while \u03b3 = 0 seems to lead to better H(L | P ) values.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": null
},
{
"start": 298,
"end": 305,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quality q of steps",
"sec_num": "7.1.1"
},
{
"text": "In any case, from a model complexity perspective, 6 increasing \u03b3 may certainly be beneficial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality q of steps",
"sec_num": "7.1.1"
},
{
"text": "For example, Table 5 shows that with \u03b3 = 0, Algorithm 1 will select up to 15 different steps for the given choice \u2126, most of which seem linguistically questionable. On the contrary, with a large \u03b3, Algorithm 1 employs only four resp. five different steps for the G2P data; most importantly, among these are (1, 1), (2, 1) and (3, 1), all of which are in accordance with linguistic reasoning as e.g. outlined in Section 5. Thus, we can think of q as a 'regularization term' that prevents the algorithm from 'overfitting' the data.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quality q of steps",
"sec_num": "7.1.1"
},
{
"text": "Table 5 : Steps and their frequency masses in percent for different data sets for \u03b3 = 10 (top rows) and \u03b3 = 0 (bottom rows), averaged over two runs; the columns are the steps (1, 1), (2, 1), (3, 1), (4, 1), (1, 2), (1, 0), (2, 3), (3, 2), (3, 3), (4, 2), (4, 3), (4, 4), (2, 2), (0, 1), (1, 3). We include only steps whose average occurrence exceeds 10.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quality q of steps",
"sec_num": "7.1.1"
},
{
"text": "We briefly discuss here a possibility to detect latent classes via the concept of colored paths. Assume that a corpus of colored alignments is available and let each color be represented by the contexts (graphones to the left and right) of its members; moreover, define the 'likelihood' L that the pair p_{x,y} := (x_{\u03b1_{i-1}+1}^{\u03b1_i}, y_{\u03b2_{i-1}+1}^{\u03b2_i}) is of color c as the (document) similarity (in an information retrieval sense) of p_{x,y}'s contexts with color c, which we can e.g. implement via the cosine similarity of the context vectors associated with p_{x,y} and c. For number of colors C = 2, we then find, under this specification, when running Algorithms 1 and 2 with \u03b3 = 0 and \u03c7 = 1, alignments of the following kinds: a | nn | u | al aligned with & | n | jU | l, and ph | o | n | e | me aligned with f | @U | n | i | m, where we use bold font to distinguish the two color classes, and use original E-Celex notation for phonemic characters. It is clear that the algorithm has detected some kind of consonant/vowel distinction on a phonemic level here. We find similar kinds of latent classes for the other G2P data sets; for the morphology data, the algorithm learns (less interestingly) to detect word endings and starts under this specification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Colors",
"sec_num": "7.1.2"
},
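The 'document similarity' used for assigning colors can be implemented, as the text suggests, via cosine similarity of sparse context-count vectors. A minimal sketch follows; the function names and the context encoding are our own assumptions, not the paper's specification:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two sparse count vectors given as dicts."""
    dot = sum(c * v.get(k, 0) for k, c in u.items())
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def best_color(pair_context, color_contexts):
    """Score a graphone pair's context vector against each color's
    aggregated context vector; return the best-matching color."""
    scores = {c: cosine(pair_context, v) for c, v in color_contexts.items()}
    return max(scores, key=scores.get)
```

Here a context vector might count, say, neighboring graphones keyed as "left:n" or "right:l"; a pair is assigned the color whose aggregated contexts look most like its own.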
{
"text": "We report results of experiments on transducing x strings to y strings for the G2P data and the morphology data sets. We exclude E-Celex because training the CRF with our parametrizations (e.g. all features in a window size of four) regularly did not terminate, due to the large size of the data set (> 60,000 string pairs). Likewise, for reasons of computing resources, 7 we do not use ten-fold cross-validation but, as in (Jiampojamarn et al., 2008) , train on the first 9 folds given by the Pascal challenge, testing on the last. Moreover, for the G2P data, we use an -scattering model with steps S = {(1, 0), (1, 1)} as a predictor model from which to infer the number of parts k\u0302 for decoding and then apply Algorithm 3. 8 For alignments, we use in all cases Algorithms 1 and 2 with \u03b3 = 0 and \u03c7 = 0. As reference for the G2P data, we give word accuracy rates as announced by (Bisani and Table 6 : Data sets and word accuracy rates in percent. DSE-F: (Dreyer et al., 2008) using 'pure'",
"cite_spans": [
{
"start": 418,
"end": 445,
"text": "(Jiampojamarn et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 872,
"end": 883,
"text": "(Bisani and",
"ref_id": null
},
{
"start": 947,
"end": 968,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 884,
"end": 891,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transductions",
"sec_num": "7.2"
},
{
"text": "alignments and features. DSE-FL: (Dreyer et al., 2008) using alignments, features and latent classes. Mos3, Mos15: Moses system with window sizes of 3 and 15, resp., as reported by (Dreyer et al., 2008) . M-M+HMM: Many-to-many aligner with HMM and instance-based segmenter for decoding as reported by (Jiampojamarn et al., 2007) . BN: (Bisani and Ney, 2008) using a machine translation motivated approach to many-to-many alignments. MeR+A * : Results of Moses system on G2P data as reported by (Rama et al., 2009) . CRF-3: Our approach with window size of 3 and 3-gram scoring model (see Algorithm 3). CRF-4: Our approach with window size of 4 and 3-gram scoring model. CRF-4 * : Our approach with window size of 4 and 4-gram scoring model and 2-best lists (i.e. in Algorithm 3, obtain \u0177_1 and \u0177_2 as the two most probable transductions of s). In bold: Best results (no statistical tests). Underlined: best results using 'pure' alignments.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 181,
"end": 202,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 301,
"end": 328,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 335,
"end": 357,
"text": "(Bisani and Ney, 2008)",
"ref_id": "BIBREF3"
},
{
"start": 494,
"end": 513,
"text": "(Rama et al., 2009)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transductions",
"sec_num": "7.2"
},
{
"text": "2008), (Jiampojamarn et al., 2007) , and (Rama et al., 2009) , who give the Moses 'baseline' (Koehn et al., 2007) .",
"cite_spans": [
{
"start": 7,
"end": 34,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 41,
"end": 60,
"text": "(Rama et al., 2009)",
"ref_id": "BIBREF26"
},
{
"start": 94,
"end": 114,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transductions",
"sec_num": "7.2"
},
{
"text": "For the morphology data we use exactly the same training/test data splits as in (Dreyer et al., 2008) . Moreover, because (Dreyer et al., 2008) report all results in terms of window sizes of 3, we do likewise for this data. For decoding we do not use a (complex) predictor model here but rely on simple statistics; e.g. we find that for the class 13SIA, k is always in {m \u2212 2, m \u2212 1, m}, where m is the length of x, so we apply Algorithm 3 three times and select the best scoring \u0177 string. To avoid zeros in the decoding process (see discussion in Section 6.2), we replace the (0, 2) steps used in the rP and 2PKE data sets by a step (1, 3).",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 122,
"end": 143,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transductions",
"sec_num": "7.2"
},
{
"text": "Results are shown in Table 6 . For the G2P data, our approach always outperforms the best reported results for pipeline approaches (see below), while we are significantly below the results reported by (Dreyer et al., 2008) for the morphology data in two out of four cases. Conversely, when 'pure' alignments are taken into consideration - (Dreyer et al., 2008) learn very complex latent classes with which to enrich alignments - our results are clearly better throughout. In almost all cases, we significantly beat the Moses 'baseline'.",
"cite_spans": [
{
"start": 201,
"end": 222,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 339,
"end": 360,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transductions",
"sec_num": "7.2"
},
{
"text": "We believe our alignment procedure to be superior to the one presented in (Jiampojamarn et al., 2007) (and likewise to the 'machine translation motivated' approach outlined by (Bisani and Ney, 2008) ) from a number of perspectives. First, it is more flexible and general in that it allows the specification of arbitrary non-negative steps S and arbitrary similarity measures sim. Moreover, as we have shown, our approach can very easily and conveniently be adapted to incorporate step quality measures, which may turn out to be very useful in detecting the 'right' choice of S (i.e. as a 'regularization term'); and our approach can also easily be generalized to incorporate the modeling of latent classes as e.g. done in (Dreyer et al., 2008) , within a polynomial running time framework; further generalizations such as semi-ring specifications (Mohri, 2002) are obvious but not discussed in the current work (cf. (Eger, 2012b) ). Secondly, our algorithm is very simple and intuitive, while being equally computationally tractable and making the same sorts of independence assumptions as in (Jiampojamarn et al., 2007) .",
"cite_spans": [
{
"start": 74,
"end": 101,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 177,
"end": 199,
"text": "(Bisani and Ney, 2008)",
"ref_id": "BIBREF3"
},
{
"start": 723,
"end": 744,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 848,
"end": 861,
"text": "(Mohri, 2002)",
"ref_id": "BIBREF23"
},
{
"start": 917,
"end": 930,
"text": "(Eger, 2012b)",
"ref_id": "BIBREF13"
},
{
"start": 1104,
"end": 1131,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "As regards decoding, (Jiampojamarn et al., 2007) use a grapheme segmentation module where each grapheme letter can form a chunk with its neighbor or stand alone, a decision that is based on local context and instance-based learning. We hold this approach to be insufficient because (besides the obvious drawback that, as we have shown, chunks larger than two seem appropriate for G2P) it unnecessarily restricts the search space for grapheme segmentation; once a decision is made to join two letters, it cannot be reversed and alternative segmentations are not considered. The same holds true for the phrasal decoder approach outlined in (Jiampojamarn et al., 2008) , a critique already voiced in (Dreyer et al., 2008) , namely, that the input string is segmented into substrings which are transduced independently of each other, ignoring context. Contrarily, for decoding x, we compute all possible segmentations of x and score them (in conjunction with the transduced \u0177 strings) with higher order n-gram models, which is clearly superior to the named approaches because it both takes context into account and does not restrict the search space. Moreover, given an adequate predictor model, we found that enumerating all possible restricted integer compositions is so fast that no further investigation of restricting the search space is necessary. 9",
"cite_spans": [
{
"start": 21,
"end": 48,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 637,
"end": 664,
"text": "(Jiampojamarn et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 711,
"end": 732,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
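The decoding step described above, enumerating all restricted segmentations of x for a predicted number of parts, can be sketched as follows; this is a naive recursive generator under our own naming, not the (Updyke, 2010) algorithm the paper actually uses:

```python
def compositions(m, k, parts):
    """Generate all compositions of m into exactly k ordered parts, each
    part drawn from the allowed set `parts` (e.g. the x-side lengths of
    the steps in S)."""
    if k == 0:
        if m == 0:
            yield ()
        return
    for p in parts:
        if p <= m:
            for rest in compositions(m - p, k - 1, parts):
                yield (p,) + rest

def segmentations(x, k, parts):
    """All ways to cut string x into k chunks with allowed chunk lengths;
    each candidate would then be transduced and scored by the n-gram model."""
    for comp in compositions(len(x), k, parts):
        chunks, i = [], 0
        for p in comp:
            chunks.append(x[i:i + p])
            i += p
        yield chunks
```

For instance, cutting a three-letter string into k = 2 chunks with chunk lengths in {1, 2} yields exactly the two candidate segmentations one would expect.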
{
"text": "While we thus believe our individual components for alignment and decoding to be superior to the mentioned approaches, our modeling of string transductions adheres to a pipeline approach - in (Jiampojamarn et al., 2008) 's words - that, as they suggest, is inferior to a unified framework such as the one they present. All our components can be integrated within such a framework, which is left to future research. In contrast with (Dreyer et al., 2008) , we believe our alignments (per se) to be more adequate (they use one-to-one and one-to-zero alignments), which the performance measures corroborate, while their idea to enrich alignments with a multitude of latent classes (more complex than representable in our framework) obviously outperforms our method on certain data sets such as those encountered in morphology, where e.g. latent word classes may be of great importance.",
"cite_spans": [
{
"start": 191,
"end": 218,
"text": "(Jiampojamarn et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 423,
"end": 444,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "We have presented a simple and general framework for generating monotone many-to-many alignments that competes with (Jiampojamarn et al., 2007) 's alignment procedure. Moreover, we have discussed crucial independence assumptions and, thus, limitations of this algorithm, and shown that exhaustive enumeration (among other methods) can overcome these problems - in particular, due to the relatively small search space - in the field of monotone alignments. Additionally, we have discussed problems of standard alignment quality measures such as conditional entropy and have suggested an alternative decoding procedure for string transduction within the monotone many-to-many alignment framework that addresses the limitations of the procedures suggested by (Jiampojamarn et al., 2007) and (Jiampojamarn et al., 2008) . In future work, we intend to explore more extensively, in particular, the effects of appropriate step restriction and regularization upon alignment quality.",
"cite_spans": [
{
"start": 115,
"end": 142,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 752,
"end": 779,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 784,
"end": 811,
"text": "(Jiampojamarn et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
},
{
"text": "The many-to-many alignment algorithm designed in (Jiampojamarn et al., 2007) is an extension of a one-to-one stochastic transducer devised in (Ristad and Yianilos, 1998). Moreover, (Brill and Moore, 2000) learn the weighted edit distance between string pairs where edit operations may encompass arbitrary subsequences of strings, a setting also closely related to our problem of monotone many-to-many alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "But also note the dependence of the running time on the definition of sim, q and L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The variant of EM that we describe is sometimes called hard EM, while e.g. (Jiampojamarn et al., 2007) present a soft EM version; but see the discussion in (Samdani et al., 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the case of upper bounds, (Malandro, 2012) provides asymptotics for the number of restricted integer compositions, which are, however, beyond the scope of the present work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Possible exceptions include e.g. extra terminal vowel sounds, as in Italian sport, pronounced as s p o r t @. As pointed out by a reviewer, other such exceptions might include short vowels in Arabic or Hebrew script, which are generally not graphemically represented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Taking model complexity into account is in accordance with e.g. Occam's razor or Akaike's information criterion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "E.g. a single run of the CRF on the G-Celex data takes longer than 24 hours on a standard PC. 8 We train the -scattering model on data where all multi-character phonemes such as ks are merged into a single character, as obtained from the alignments given by Algorithms 1 and 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used the algorithm presented in (Updyke, 2010).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A new look at the statistical model identification",
"authors": [
{
"first": "H",
"middle": [],
"last": "Akaike",
"suffix": ""
}
],
"year": 1974,
"venue": "IEEE Transactions on Automatic Control",
"volume": "19",
"issue": "6",
"pages": "716--723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716-723.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The CELEX2 lexical database. Linguistic Data Consortium",
"authors": [
{
"first": "H",
"middle": [],
"last": "Baayen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Piepenbrock",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gulikers",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baayen, H., Piepenbrock, R., and Gulikers, L. (1996). The CELEX2 lexical database. Linguistic Data Consortium, Philadelphia.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automated Japanese grapheme-phoneme alignment",
"authors": [
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the International Conference on Cognitive Science",
"volume": "",
"issue": "",
"pages": "349--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baldwin, T. and Tanaka, H. (1999). Automated Japanese grapheme-phoneme alignment. In Proc. of the International Conference on Cognitive Science, pages 349-354.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Joint-sequence models for grapheme-to-phoneme conversion",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bisani",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "Speech Communication",
"volume": "50",
"issue": "5",
"pages": "434--451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bisani, M. and Ney, H. (2008). Joint-sequence models for grapheme-to-phoneme conversion. Speech Communication, 50(5):434-451.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Issues in building general letter to sound rules",
"authors": [
{
"first": "A",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lenzo",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Pagel",
"suffix": ""
}
],
"year": 1998,
"venue": "The Third ESCA/COCOSDA Workshop (ETRW) on Speech Synthesis. ISCA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, A., Lenzo, K., and Pagel, V. (1998). Issues in building general letter to sound rules. In The Third ESCA/COCOSDA Workshop (ETRW) on Speech Synthesis. ISCA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An improved error model for noisy channel spelling correction",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "286--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brill, E. and Moore, R. (2000). An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 286-293. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "Pietra",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "S",
"middle": [
"D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"D"
],
"last": "Jelinek",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Roossin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "2",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P., Cocke, J., Pietra, S. D., Pietra, V. D., Jelinek, F., Lafferty, J., Mercer, R., and Roossin, P. (1990). A statistical approach to machine translation. Computational Linguistics, 16(2):79-85.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistical language modeling using the CMU-Cambridge toolkit",
"authors": [
{
"first": "P",
"middle": [],
"last": "Clarkson",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings ESCA Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clarkson, P. and Rosenfeld, R. (1997). Statistical language modeling using the CMU- Cambridge toolkit. In Proceedings ESCA Eurospeech.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Advanced Combinatorics. D",
"authors": [
{
"first": "L",
"middle": [],
"last": "Comtet",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Comtet, L. (1974). Advanced Combinatorics. D. Reidel Publishing Company.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Une base de donn\u00e9es lexicales informatis\u00e9e pour le francais \u00e9crit et parl\u00e9",
"authors": [
{
"first": "A",
"middle": [],
"last": "Content",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mousty",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Radeau",
"suffix": ""
}
],
"year": 1990,
"venue": "L'Ann\u00e9e Psychologique",
"volume": "",
"issue": "",
"pages": "551--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Content, A., Mousty, P., and Radeau, M. (1990). Une base de donn\u00e9es lexicales informatis\u00e9e pour le francais \u00e9crit et parl\u00e9. In L'Ann\u00e9e Psychologique, pages 551-566.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "Rubin",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society. Series B (Methodological)",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dempster, A., Laird, N., and Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1-38.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Latent-variable modeling of string transductions with finite-state methods",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1080--1089",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dreyer, M., Smith, J., and Eisner, J. (2008). Latent-variable modeling of string transduc- tions with finite-state methods. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1080-1089, Honolulu, Hawaii.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On S-restricted f -weighted integer compositions and extended binomial coefficients",
"authors": [
{
"first": "S",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eger, S. (2012a). On S-restricted f -weighted integer compositions and extended binomial coefficients. Submitted.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sequence alignment with arbitrary steps and further generalizations, with applications to alignments in linguistics",
"authors": [
{
"first": "S",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eger, S. (2012b). Sequence alignment with arbitrary steps and further generalizations, with applications to alignments in linguistics. Submitted.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bi-directional conversion between graphemes and phonemes using a joint n-gram model",
"authors": [
{
"first": "L",
"middle": [],
"last": "Galescu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. 4th ISCA Tutorial and Research Workshop on Speech Synthesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galescu, L. and Allen, J. (2001). Bi-directional conversion between graphemes and phonemes using a joint n-gram model. In Proc. 4th ISCA Tutorial and Research Workshop on Speech Synthesis, Perthshire, Scotland.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Compositions of n with parts in a set",
"authors": [
{
"first": "S",
"middle": [],
"last": "Heubach",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mansour",
"suffix": ""
}
],
"year": 2004,
"venue": "Congressus Numerantium",
"volume": "164",
"issue": "",
"pages": "127--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heubach, S. and Mansour, T. (2004). Compositions of n with parts in a set. Congressus Numerantium, 164:127-143.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A re-examination of lexical association measures",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "M.-Y",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2009,
"venue": "MWE '09 Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoang, H., Kim, S., and Kan, M.-Y. (2009). A re-examination of lexical association mea- sures. In MWE '09 Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Joint processing and discriminative training for letter-to-phoneme conversion",
"authors": [
{
"first": "S",
"middle": [],
"last": "Jiampojamarn",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2008,
"venue": "46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-08: HLT)",
"volume": "",
"issue": "",
"pages": "905--913",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiampojamarn, S., Cherry, C., and Kondrak, G. (2008). Joint processing and discriminative training for letter-to-phoneme conversion. In 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-08: HLT), pages 905-913.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Letter-phoneme alignment: An exploration",
"authors": [
{
"first": "S",
"middle": [],
"last": "Jiampojamarn",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2010,
"venue": "The 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010)",
"volume": "",
"issue": "",
"pages": "780--788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiampojamarn, S. and Kondrak, G. (2010). Letter-phoneme alignment: An exploration. In The 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 780-788.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Applying many-to-many alignments and Hidden Markov models to letter-to-phoneme conversion",
"authors": [
{
"first": "S",
"middle": [],
"last": "Jiampojamarn",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kondrak",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sherif",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2007)",
"volume": "",
"issue": "",
"pages": "372--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiampojamarn, S., Kondrak, G., and Sherif, T. (2007). Applying many-to-many alignments and Hidden Markov models to letter-to-phoneme conversion. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2007), pages 372-379, Rochester, NY.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. (2007). Moses: Open source toolkit for statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), Prague, Czech Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "McCallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. 18th International Conf. on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, J., McCallum, A., and Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning, pages 282-289.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Asymptotics for restricted integer compositions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Malandro",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malandro, M. (2012). Asymptotics for restricted integer compositions. Preprint available at http://arxiv.org/pdf/1108.0337v1.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semiring frameworks and algorithms for shortest-distance problems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Automata, Languages and Combinatorics",
"volume": "7",
"issue": "3",
"pages": "321--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohri, M. (2002). Semiring frameworks and algorithms for shortest-distance problems. Journal of Automata, Languages and Combinatorics, 7(3):321-350.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins",
"authors": [
{
"first": "S",
"middle": [],
"last": "Needleman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Wunsch",
"suffix": ""
}
],
"year": 1970,
"venue": "Journal of Molecular Biology",
"volume": "48",
"issue": "",
"pages": "443--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Needleman, S. and Wunsch, C. (1970). A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48:443-453.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Transliteration alignment",
"authors": [
{
"first": "V",
"middle": [],
"last": "Pervouchine",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "136--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pervouchine, V., Li, H., and Lin, B. (2009). Transliteration alignment. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 136-144.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Modeling letter to phoneme conversion as a phrase based statistical machine translation problem with minimum error rate training",
"authors": [
{
"first": "T",
"middle": [],
"last": "Rama",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kolachina",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL HLT 2009 Student Research Workshop",
"volume": "",
"issue": "",
"pages": "90--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rama, T., Kumar, A., and Kolachina, S. (2009). Modeling letter to phoneme conversion as a phrase based statistical machine translation problem with minimum error rate training. In NAACL HLT 2009 Student Research Workshop, pages 90-95, Colorado, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning string-edit distance",
"authors": [
{
"first": "E",
"middle": [],
"last": "Ristad",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Yianilos",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "20",
"issue": "5",
"pages": "522--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ristad, E. and Yianilos, P. (1998). Learning string-edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(5):522-533.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Unified expectation maximization",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samdani",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2012,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "688--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samdani, R., Chang, M.-W., and Roth, D. (2012). Unified expectation maximization. In HLT-NAACL, pages 688-698.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Hidden Markov models for grapheme to phoneme conversion",
"authors": [
{
"first": "P",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 9th European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor, P. (2005). Hidden Markov models for grapheme to phoneme conversion. In Proceedings of the 9th European Conference on Speech Communication and Technology 2005.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A unified approach to algorithms generating unrestricted and restricted integer compositions and integer partitions",
"authors": [
{
"first": "J",
"middle": [],
"last": "Updyke",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Mathematical Modelling and Algorithms",
"volume": "9",
"issue": "1",
"pages": "53--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Updyke, J. (2010). A unified approach to algorithms generating unrestricted and restricted integer compositions and integer partitions. Journal of Mathematical Modelling and Algorithms, 9(1):53-97.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Pascal letter-to-phoneme conversion challenge",
"authors": [
{
"first": "A",
"middle": [],
"last": "Van Den Bosch",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Damper",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Gustafson",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Marchand",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "van den Bosch, A., Chen, S., Daelemans, W., Damper, R., Gustafson, K., Marchand, Y., and Yvon, F. (2006). Pascal letter-to-phoneme conversion challenge. http://www.pascalnetwork.org/Challenges/PRONALSYL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Towards a theory of word length distribution",
"authors": [
{
"first": "G",
"middle": [],
"last": "Wimmer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "K\u00f6hler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grotjahn",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Altmann",
"suffix": ""
}
],
"year": 1994,
"venue": "Journal of Quantitative Linguistics",
"volume": "1",
"issue": "",
"pages": "98--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wimmer, G., K\u00f6hler, R., Grotjahn, R., and Altmann, G. (1994). Towards a theory of word length distribution. Journal of Quantitative Linguistics, 1:98-106.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "An introduction to model selection",
"authors": [
{
"first": "W",
"middle": [],
"last": "Zucchini",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Mathematical Psychology",
"volume": "44",
"issue": "",
"pages": "41--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zucchini, W. (2000). An introduction to model selection. Journal of Mathematical Psychology, 44:41-61.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"text": "Algorithm 1 (Decoding): procedure decode(x = x1 . . . xm; k, \u03b1, \u03b2, tr): for each s \u2208 C(m, k, \u03b1, \u03b2) do, where C(m, k, \u03b1, \u03b2) denotes the set of all integer compositions of m with k parts, each between \u03b1 and \u03b2: \u0177 \u2190 tr(s); z\u0177 \u2190 ngramScore(x, \u0177). A segmentation of x of length m with k parts uniquely corresponds to an integer composition (a way of writing m as a sum of non-negative integers) of the integer m with k parts.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Learning curves over iterations for F-Brulex data, K = 50 and K = 300, for unigram and bigram models.",
"num": null,
"type_str": "figure"
},
"TABREF2": {
"text": "Average grapheme and phoneme string lengths in the respective data sets, and probabilities that lengths exceed 15.",
"content": "<table><tr><td>Dataset</td><td>\u00b5G</td><td>\u00b5P</td><td>P[G&gt;15]</td><td>P[P&gt;15]</td></tr><tr><td>German-Celex</td><td>9.98</td><td>8.67</td><td>4.80%</td><td>1.62%</td></tr><tr><td>French-Brulex</td><td>8.49</td><td>6.71</td><td>1.36%</td><td>0.15%</td></tr><tr><td>English-Celex</td><td>8.21</td><td>7.39</td><td>1.03%</td><td>0.40%</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}