{
"paper_id": "R13-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:56:02.786385Z"
},
"title": "Realization of Common Statistical Methods in Computational Linguistics with Functional Automata",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Gerdjikov",
"suffix": "",
"affiliation": {},
"email": "stgerdjikov@abv.bg"
},
{
"first": "Petar",
"middle": [],
"last": "Mitankin",
"suffix": "",
"affiliation": {},
"email": "pmitankin@fmi.uni-sofia.bg"
},
{
"first": "Vladislav",
"middle": [],
"last": "Nenchev",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we present the functional automata as a general framework for representation, training and exploring of various statistical models as LLM's, HMM's, CRF's, etc. Our contribution is a new construction that allows the representation of the derivatives of a function given by a functional automaton. It preserves the natural representation of the functions and the standard product and sum operations of real numbers. In the same time it requires no additional overhead for the standard dynamic programming techniques that yield the computation of a functional value.",
"pdf_parse": {
"paper_id": "R13-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we present the functional automata as a general framework for representation, training and exploring of various statistical models as LLM's, HMM's, CRF's, etc. Our contribution is a new construction that allows the representation of the derivatives of a function given by a functional automaton. It preserves the natural representation of the functions and the standard product and sum operations of real numbers. In the same time it requires no additional overhead for the standard dynamic programming techniques that yield the computation of a functional value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Statistical models such as n-gram language models (Chen and Goodman, 1996) , hidden Markov models (Rabiner, 1989) , conditional random fields (Lafferty et al., 2001 ), log-linear models (Darroch and Ratcliff, 1972) are widely applied in the natural language processing in order to approach various problems, e.g. parsing (Sha and Pereira, 2003) , speech recognition (Juang and Rabiner, 1991) , statistical machine translation (Brown et al., 1993) . Different statistical models perform differently on different tasks. Thus in order to find the best practical solution one might need to try several approaches before getting the desired effect. Disposing on a general framework that allows the flexibility to change the statistical model or/and training scheme would spend much efforts and time.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "(Chen and Goodman, 1996)",
"ref_id": "BIBREF1"
},
{
"start": 98,
"end": 113,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF17"
},
{
"start": 142,
"end": 164,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF10"
},
{
"start": 186,
"end": 214,
"text": "(Darroch and Ratcliff, 1972)",
"ref_id": "BIBREF2"
},
{
"start": 321,
"end": 344,
"text": "(Sha and Pereira, 2003)",
"ref_id": "BIBREF20"
},
{
"start": 366,
"end": 391,
"text": "(Juang and Rabiner, 1991)",
"ref_id": "BIBREF8"
},
{
"start": 426,
"end": 446,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Focusing on this pragmatical problem, we propose the functional automata as a possible solution. The basic idea is to consider the mathemati-cal expressions of sums and products arising in the statistical models as regular expressions. Thus regarding the functions in these expressions as individual characters, the sums as unions and the products as concatenation, we get the desired correspondence. The relation between a particular statistical model and a functional automaton for its representation is then rather straightforward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The training of the statistical models is in a way more involved. Most of the approaches require a gradient method that estimates the best model parameters. To this end one needs to have an efficient representation not only of the function used by the model but also of its (partial) derivatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To solve similar problem Eisner and Li introduce first-order and second-order expectation semirings. In (Jason Eisner, 2002; Zhifei Li and Jason Eisner, 2009) it is shown how derivatives of functions arising in statistical models can be represented. This is achieved by the means of an algebraic construction that: (i) considers pairs of functions (first-order expectation semiring) and quadruples of functions (second-order expectation semiring); (ii) introduces an operation on pairs and quadruples, respectively, of functions that replaces the multiplication and is used to simulate the multiplication of first-and second-order derivatives, respectively. Thus the higher the order of the derivatives in interest, the more complex would be the necessary expectation semiring and the operations that it would require.",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "(Jason Eisner, 2002;",
"ref_id": "BIBREF3"
},
{
"start": 125,
"end": 158,
"text": "Zhifei Li and Jason Eisner, 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the current paper we propose an alternative approach. It is based on a combinatorial construction that allows preserving both: (i) manipulation with single functions and (ii) the usage of the standard multiplication and addition of real numbers. Thus we get a uniform representation of functions, their first-and higher order derivatives. Our approach requires the same storage as the approach in (Jason Eisner, 2002; Zhifei Li and Jason Eisner, 2009) and enables the same efficiency for the traversal procedures described in (Zhifei Li and Jason Eisner, 2009) .",
"cite_spans": [
{
"start": 400,
"end": 420,
"text": "(Jason Eisner, 2002;",
"ref_id": "BIBREF3"
},
{
"start": 421,
"end": 454,
"text": "Zhifei Li and Jason Eisner, 2009)",
"ref_id": "BIBREF11"
},
{
"start": 537,
"end": 563,
"text": "Li and Jason Eisner, 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 3 we show that the values of a function represented by an acyclic functional automaton can be efficiently computed by the means of a standard dynamic programming technique. We further describe how to construct functional automata for the partial derivatives of F by given functional automaton representing F . We show in Sections 2 and 6 that such automata can be used for training log-linear models, hidden Markov models and conditional random fields. We only require that the objective function is represented via functional automata. In Section 5 we present a construction of functional automaton for a loglinear model where one of the feature functions uses an n-gram language model (Chen and Goodman, 1996) .",
"cite_spans": [
{
"start": 698,
"end": 722,
"text": "(Chen and Goodman, 1996)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 7 we present evaluation of a developed system, based on functional automata, on the tasks of (i) noisy historical text normalization and (ii) OCR postcorrection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We consider the task of automatic normalization of Early Modern English texts. In the next two paragraphs we define some notions related to this task. We use them afterwards to formulate typical problems of training and search that can be effectively solved by functional automata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "Given a source text s, say s = theldest sonn hath bin kild, and the goal is to find the most relevant modern English equivalent of s. A candidate generator is an algorithm that for a fixed source word or sequence of words, say s i s i+1 . . . s i+k , generates finite number of normalization candidates and supplies each normalization candidate, c, with a conditional probability, p cg (c | s i s i+1 . . . s i+k ). Hence we can assume that the candidate generator provides the information in the form of Table 1 . In this sense the candidate generator corresponds to the word-to-word or phrase-to-phrase translation tables in statistical machine translation systems (Koehn et al., 2003) . From the candidates we construct possible normalization targets: eldest sun hat been kid, the eldest soon has bean killed, the eldest son has been killed etc. For normalization of texts produced by OCR system from noisy historical documents the candidate generator could take into account both typical OCR errors and historical spelling variations, (Reffle, 2011) or can use directly automatically extracted spelling variations, for example (Gerdjikov et al., 2013) .",
"cite_spans": [
{
"start": 667,
"end": 687,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF9"
},
{
"start": 1039,
"end": 1053,
"text": "(Reffle, 2011)",
"ref_id": "BIBREF18"
},
{
"start": 1131,
"end": 1155,
"text": "(Gerdjikov et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 505,
"end": 512,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "A normalization pair is a pair p = \u27e8w, c\u27e9 such that the sequence of target words c is a normalization candidate for the sequence of source words w. We call w left side and c right side of the normalization pair p. The left and the right sides of p are denoted l(p) and r(p) respectively. In our example some of the normalization pairs are \u27e8theldest, eldest\u27e9, \u27e8theldest, theeldest\u27e9, \u27e8kild, killed\u27e9, etc. A normalization alignment from s to t, denoted s \u2192 t, is a sequence of normalization pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "p 1 p 2 . . . p k such that s = l(p 1 )l(p 2 ) . . . l(p k ) and t = r(p 1 )r(p 2 ) . . . r(p k ). The i-th normalization pair p i of the alignment s \u2192 t is denoted (s \u2192 t) i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "The length k of the alignment is denoted |s \u2192 t|. Thus a possible normalization alignment in our example, from s = theldest sonn hath bin kild to t = eldest sun hat been kid is \u27e8theldest, eldest\u27e9 \u27e8sonn, sun\u27e9\u27e8hath, hat\u27e9\u27e8bin, been\u27e9\u27e8kild, kid\u27e9. We denote with A s the set of all normalization alignments from s. Note that A s is always finite, because the number of normalization candidates for each sequence s i s i+1 . . . s i+k of source words is finite.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "Problem. Given a training corpus of normalization alignments train a log-linear model that combines the candidate generator with an n-gram statistical language model. Once the model is trained, find a best normalization alignment s \u2192 t for a given source s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "Firstly, we consider the case where n = 1, i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "we have a monogram language model which assigns a nonzero probability p lm (t i ) to each target word t i . The general case of arbitrary n-gram language model is postponed to Section 5. There are two feature functions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "h lm (s \u2192 t) = log \u220f |t| i=1 p lm (t i ) and h cg (s \u2192 t) = log \u220f |s\u2192t| i=1 p cg [r((s \u2192 t) i ) | l((s \u2192 t) i )]. The probability of a normalization alignment s \u2192 t given s is p \u03bb (s \u2192 t | s) = exp[\u03bb lm h lm (s \u2192 t) + \u03bb cg h cg (s \u2192 t)] \u2211 s\u2192t \u2032 \u2208As exp[\u03bb lm h lm (s \u2192 t \u2032 ) + \u03bb cg h cg (s \u2192 t \u2032 )] ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
{
"text": "where \u03bb = \u27e8\u03bb lm , \u03bb cg \u27e9 are the parameters of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-linear models",
"sec_num": "2"
},
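Written out, p_λ(s → t | s) is a softmax over the alignments in A_s of the weighted feature sums λ_lm h_lm + λ_cg h_cg. The following minimal Python sketch illustrates this; the function name `p_lambda` and the feature values are made up for illustration and are not from the paper:

```python
import math

def p_lambda(h, alignments, lam):
    """h maps an alignment id to its (h_lm, h_cg) feature pair;
    lam = (lambda_lm, lambda_cg). Returns p_lambda(a | s) for each a."""
    def e(a):
        h_lm, h_cg = h[a]
        return math.exp(lam[0] * h_lm + lam[1] * h_cg)
    z = sum(e(a) for a in alignments)        # the normalizer Z_s(lambda)
    return {a: e(a) / z for a in alignments}

# Two hypothetical alignments with made-up log-probability features.
h = {"a1": (-4.0, -1.0), "a2": (-6.0, -0.5)}
probs = p_lambda(h, ["a1", "a2"], (1.0, 1.0))
assert abs(sum(probs.values()) - 1.0) < 1e-12
assert probs["a1"] > probs["a2"]   # a1 has the higher weighted score
```

The denominator is exactly the quantity Z_s(λ) whose efficient computation Section 3 addresses.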
{
"text": "Assume that we have a training corpus T of N normalization alignments,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "T = \u27e8s (1) \u2192 t (1) , s (2) \u2192 t (2) , . . . , s (N ) \u2192 t (N ) \u27e9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "The training task is to find parameters\u03bb that optimize the joint probability over the training corpus,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "\u03bb = argmax \u03bb \u220f N n=1 p \u03bb (s (n) \u2192 t (n) | s (n) ). Search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "Once the parameters\u03bb are fixed, the problem is to find a best normalization alignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "s \u2192 t = argmax s\u2192t \u2032 \u2208As p\u03bb(s \u2192 t \u2032 ) for a given input s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "Introducing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e s\u2192t (\u03bb) = exp[\u03bb lm h lm (s \u2192 t) + \u03bb cg h cg (s \u2192 t)] and Z s (\u03bb) = \u2211 s\u2192t \u2032 \u2208As e s\u2192t \u2032 (\u03bb),",
"eq_num": "(1)"
}
],
"section": "Training.",
"sec_num": null
},
{
"text": "we obtain\u03bb = argmax \u03bb L(\u03bb), where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "L(\u03bb) = \u2211 N n=1 [\u03bb lm h lm (s (n) \u2192 t (n) )+ \u03bb cg h cg (s (n) \u2192 t (n) ) \u2212 log Z s (n) (\u03bb)]. (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "To optimize L(\u03bb) we use a gradient method that requires the computation of L(\u03bb), \u2202L \u2202\u03bbcg (\u03bb) and \u2202L \u2202\u03bb lm (\u03bb) by given \u03bb. For i = lm, cg we obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "\u2202L \u2202\u03bb i (\u03bb) = N \u2211 n=1 [h i (s (n) \u2192 t (n) ) \u2212 \u2202Z s (n) \u2202\u03bb i (\u03bb) Z s (n) (\u03bb) ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "(3) One possible choice of first order gradient method for the optimization of L is a variant of the conjugate gradient method that converges to the unique maximum of L for each starting point \u03bb 0 = \u27e8\u03bb lm0 , \u03bb cg 0 \u27e9, (Gilbert and Nocedal, 1992) .",
"cite_spans": [
{
"start": 218,
"end": 245,
"text": "(Gilbert and Nocedal, 1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training.",
"sec_num": null
},
{
"text": "The problem we faced in the previous Section is how to compute L(\u03bb) and \u2202L \u2202\u03bb i (\u03bb) at a given point \u03bb. The computation of the terms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "\u03bb i h i (s (n) \u2192 t (n) ) for i = cg (or i = lm)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "is easy since it requires a single multiplication and |s (n) ",
"cite_spans": [
{
"start": 57,
"end": 60,
"text": "(n)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "\u2192 t (n) | (or |t (n) |) additions. However the \u03bb 2 1 0 1 2 3 exp(\u03bb 1 \u03bb 3 2 ) cos(\u03bb 2 ) sin(\u03bb 1 ) 1 \u03bb 2 1 +1 Figure 1: Functional automaton representing the function F (\u03bb 1 , \u03bb 2 ) = \u03bb 2 1 sin(\u03bb 1 ) 1 \u03bb 2 1 +1 + \u03bb 2 1 cos(\u03bb 2 ) 1 \u03bb 2 1 +1 + exp(\u03bb 1 \u03bb 3 2 ) sin(\u03bb 1 ) 1 \u03bb 2 1 +1 + exp(\u03bb 1 \u03bb 3 2 ) cos(\u03bb 2 ) 1 \u03bb 2 1 +1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "term Z s (\u03bb) may require much more efforts. It suffices that each source word s i generates two candidates for the expression in Equation 1 to explode in exponential number of summation terms. Computing the derivatives then becomes even harder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "In this Section we present a novel efficient solution to these problems. It is based on a compact representation of the mathematical expressions via functional automata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "Imagine, that we have the function F (\u03bb 1 , \u03bb 2 ) given as an expression:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "\u03bb 2 1 sin(\u03bb 1 ) 1 \u03bb 2 1 +1 + \u03bb 2 1 cos(\u03bb 2 ) 1 \u03bb 2 1 +1 + exp(\u03bb 1 \u03bb 3 2 ) sin(\u03bb 1 ) 1 \u03bb 2 1 +1 + exp(\u03bb 1 \u03bb 3 2 ) cos(\u03bb 2 ) 1 \u03bb 2 1 +1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": ". Let us further assume that we interpret the individual functions \u03bb 2 1 , cos(\u03bb 2 ), 1 \u03bb 2 1 +1 , etc, as single symbols. If we further interpret the multiplication of functions as concatenation and the addition as union, then the expression for F (\u03bb 1 , \u03bb 2 ) given above can be viewed as a regular expression for which a finite state automaton can be compiled, see Figure 1 . This is the motivation for the following two definitions: Definition 3.1 Let d be a positive natural number.",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "Functional automaton is a quadruple A = \u27e8Q, q 0 , \u2206, T \u27e9, where Q is a finite set of states, q 0 \u2208 Q is a start state, \u2206 is a finite multiset of transitions of the form q W \u2212\u2192 p where p, q \u2208 Q are states and W : R d \u2192 R is a function and T \u2286 Q is a set of final states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Functional automata",
"sec_num": "3"
},
{
"text": "A = \u27e8Q, q 0 , \u2206, T \u27e9 be an acyclic functional automaton (AFA). A path \u03c0 from p 0 to p k in A is a sequence of k \u2265 0 tran- sitions \u03c0 = p 0 W 1 \u2212\u2192 p 1 W 2 \u2212\u2192 p 2 . . . p k\u22121 W k \u2212\u2192 p k . The label of \u03c0 is defined as l \u03c0 = \u220f k j=1 W j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3.2 Let",
"sec_num": null
},
{
"text": "If \u03c0 is empty (k = 0), then l \u03c0 = 1. A successful path is a path from q 0 to a final state q \u2208 T . The function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3.2 Let",
"sec_num": null
},
{
"text": "F A : R d \u2192 R represented by A is defined as F A = \u2211 \u03c0 is a successful path in A l \u03c0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3.2 Let",
"sec_num": null
},
{
"text": "Since A is acyclic, the number of successful paths is finite and F A is well defined. target word the eldest son soon sun probability 0.017 0.00002 0.0003 0.0005 0.0002 target word hat hats has bin probability 0.0001 0.00002 0.002 0.000005 target word been bean kid killed probability 0.003 0.000005 0.00002 0.0001 Table 2 : Target words and their language model probabilities.",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 322,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition 3.2 Let",
"sec_num": null
},
{
"text": "Classical constructions for union and concatenation of automata (Hopcroft and Ullman, 1979) can be adapted for functional automata. If A is the result of the union (concatenation) of A 1 and A 2 , then",
"cite_spans": [
{
"start": 64,
"end": 91,
"text": "(Hopcroft and Ullman, 1979)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3.2 Let",
"sec_num": null
},
{
"text": "F A = F A 1 + F A 2 (F A = F A 1 \u2022 F A 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3.2 Let",
"sec_num": null
},
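The adapted union and concatenation constructions can be sketched as follows. This is one possible epsilon-free variant that glues the parts together with constant-1 label functions; the representation of an automaton as (transitions, start, finals) and the function names are illustrative assumptions, not the authors' implementation:

```python
def union(a1, a2):
    """Union of two functional automata; F_A = F_A1 + F_A2.
    An automaton is (transitions, start, finals) with transitions
    given as (source, label_fn, target) triples; state sets are
    assumed disjoint."""
    t1, s1, f1 = a1
    t2, s2, f2 = a2
    one = lambda lam: 1.0              # neutral label: multiplies path labels by 1
    s = "new_start"
    return t1 + t2 + [(s, one, s1), (s, one, s2)], s, f1 | f2

def concat(a1, a2):
    """Concatenation; F_A = F_A1 * F_A2: bridge the finals of A1
    to the start of A2 with weight-1 transitions."""
    t1, s1, f1 = a1
    t2, s2, f2 = a2
    one = lambda lam: 1.0
    return t1 + t2 + [(q, one, s2) for q in f1], s1, f2

# Toy automata: A1 with one transition labelled the constant 2,
# A2 with one transition labelled the constant 3.
a1 = ([("p0", lambda lam: 2.0, "p1")], "p0", {"p1"})
a2 = ([("q0", lambda lam: 3.0, "q1")], "q0", {"q1"})
tu, su, fu = union(a1, a2)
tc, sc, fc = concat(a1, a2)
assert len(tu) == 4 and fu == {"p1", "q1"}
assert len(tc) == 3 and sc == "p0" and fc == {"q1"}
```

Since every successful path through the glued automaton picks up only factors of 1 from the glue transitions, the represented function is the sum (resp. product) of the two original functions.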
{
"text": "represented by an AFA A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of a function F A",
"sec_num": "3.1"
},
{
"text": "In order to efficiently compute F A (\u03bb) for a given \u03bb = \u27e8\u03bb 1 , \u03bb 2 , . . . , \u03bb n \u27e9, we use standard dynamic programming. Without loss of generality we assume that A = \u27e8Q, q 0 , \u2206, T \u27e9 has only one final state and each transition in A belongs to some successful path. Firstly, we sort topologically the states of the automaton A in decreasing order. Let p 1 , p 2 , . . . , p |Q| be one such order of the states, i.e. (i) p 1 \u2208 T is the only one final state, (ii) p |Q| = q 0 is the start state and (iii) if there is a transition from p i to p j then j < i. For example for the automaton on Figure 1 we obtain 3, 2, 1, 0. Afterwards for each state p j we compute a value v j in the following way:",
"cite_spans": [],
"ref_spans": [
{
"start": 590,
"end": 598,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computation of a function F A",
"sec_num": "3.1"
},
{
"text": "v 1 = 1 and v j+1 = \u2211 p j+1 W (\u03bb) \u2212\u2192 p k W (\u03bb) \u2022 v k . Eventually F A (\u03bb) = v |Q| .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of a function F A",
"sec_num": "3.1"
},
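The backward pass above can be sketched in a few lines of Python. The helper name `afa_value` and the encoding of transitions as (source, label_fn, target) triples are illustrative assumptions; the topological order is obtained here by a post-order depth-first search rather than an explicit sort, which is equivalent on an acyclic automaton:

```python
import math

def afa_value(transitions, start, final, lam):
    """Evaluate F_A(lam) for an acyclic functional automaton whose
    transitions are (source, label_fn, target) triples; label_fn maps
    the parameter vector lam to a real number."""
    out = {}
    for q, w, p in transitions:
        out.setdefault(q, []).append((w, p))
    # Post-order DFS lists successors before their predecessors (A is acyclic).
    order, seen = [], set()
    def visit(q):
        if q not in seen:
            seen.add(q)
            for _, p in out.get(q, []):
                visit(p)
            order.append(q)
    visit(start)
    # v[q] = sum of path labels from q to the final state; v[final] = 1.
    v = {final: 1.0}
    for q in order:
        if q != final:
            v[q] = sum(w(lam) * v.get(p, 0.0) for w, p in out.get(q, []))
    return v[start]

# The automaton of Figure 1 (states 0..3), representing
# F(l1, l2) = (l1^2 + exp(l1*l2^3)) * (sin(l1) + cos(l2)) / (l1^2 + 1).
trans = [
    (0, lambda l: l[0] ** 2, 1),
    (0, lambda l: math.exp(l[0] * l[1] ** 3), 1),
    (1, lambda l: math.sin(l[0]), 2),
    (1, lambda l: math.cos(l[1]), 2),
    (2, lambda l: 1.0 / (l[0] ** 2 + 1.0), 3),
]
lam = (0.5, 1.0)
direct = ((lam[0] ** 2 + math.exp(lam[0] * lam[1] ** 3))
          * (math.sin(lam[0]) + math.cos(lam[1])) / (lam[0] ** 2 + 1.0))
assert abs(afa_value(trans, 0, 3, lam) - direct) < 1e-12
```

Each label function is evaluated exactly once, so the running time matches the O(|Δ|) bound stated in the text.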
{
"text": "If the computation of W (\u03bb) by given \u03bb takes time O(1) for all label functions W , then the time for the computation of F A (\u03bb) is O(|\u2206|). Now we focus on the problem how to compute Z s (\u03bb) at a given point \u03bb, see Equation 1. We illustrate how Z s (\u03bb) can be represented by an AFA, A s , on the example from Section 2, s = theldest sonn hath bin kild. Table 1 lists the sets of candidates in modern English for each source word s i . Table 2 presents the language model probabilities for each target word. Given this data we represent the possible normalization alignments via an acyclic two-tape automaton, see Figure 2 . This automaton can be considered as a string-to-weight transducer (Mohri, 1997) . On our example, for i \u2265 2 each such path consists of a single transition, because the candidates are single words. In order to represent the candidate the eldest we use the additional state 6. The transition from 0 to 6 corresponds to the first word the of the candidate and accumulates the whole probability p cg (the eldest | theldest) = 0.75. The transition from 6 to 1 corresponds to the second word eldest of the candidate. It should be clear that removing the target words from the transitions, we obtain the AFA A s representing Z s (\u03bb). For each alignment s (n) \u2192 t (n) from the training corpus we build a separate functional automaton, like the one on Figure 2 , representing Z s (n) (\u03bb). Thus we have N automata that we use to compute L(\u03bb) via Equation (2).",
"cite_spans": [
{
"start": 689,
"end": 702,
"text": "(Mohri, 1997)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 352,
"end": 359,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 434,
"end": 441,
"text": "Table 2",
"ref_id": null
},
{
"start": 612,
"end": 620,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1366,
"end": 1374,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Computation of a function F A",
"sec_num": "3.1"
},
{
"text": "Our next goal is to compute the partial derivates \u2202L \u2202\u03bb i (\u03bb). Let us turn back to the function F (\u03bb 1 , \u03bb 2 ) represented by the automaton on Figure 1 . We show how to construct a functional automaton for",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "\u03bb 2 1 0 1 2 3 exp(\u03bb 1 \u03bb 3 2 ) cos(\u03bb 2 ) sin(\u03bb 1 ) 1 \u03bb 2 1 +1 0 \u2032 1 \u2032 2 \u2032 3 \u2032 2\u03bb 1 \u03bb 3 2 exp(\u03bb 1 \u03bb 3 2 ) cos(\u03bb 1 ) 0 \u2212 2\u03bb1 (\u03bb 2 1 +1) 2 \u03bb 2 1 sin(\u03bb 1 ) cos(\u03bb 2 ) 1 \u03bb 2 1 +1 exp(\u03bb 1 \u03bb 3 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "Figure 3: A functional automaton for the partial derivative of F (\u03bb 1 , \u03bb 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "\u2202F \u2202\u03bb 1 (\u03bb 1 , \u03bb 2 ). Let G(\u03bb 1 , \u03bb 2 ) = \u03bb 2 1 sin(\u03bb 1 ) 1 \u03bb 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "1 +1 be the first of the four summation terms of F . The partial derivative \u2202G \u2202\u03bb 1 can be written as a sum of three terms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "\u2202(\u03bb 2 1 ) \u2202\u03bb 1 sin(\u03bb 1 ) 1 \u03bb 2 1 +1 , \u03bb 2 1 \u2202(sin(\u03bb 1 )) \u2202\u03bb 1 1 \u03bb 2 1 +1 and \u03bb 2 1 sin(\u03bb 1 ) \u2202( 1 \u03bb 2 1 +1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": ") \u2202\u03bb 1 . Each of the summation terms differs from the original expression for G(\u03bb 1 , \u03bb 2 ) in exactly one multiplier whose partial derivative with respect to \u03bb 1 is computed. Thus in order to construct a functional automaton for \u2202F \u2202\u03bb 1 we can take two disjoint copies of the original functional automaton, see Figure 3 , and set transitions between them in order to reflect the partial derivatives with respect to \u03bb 1 of the single multipliers. The general result is presented in the following Proposition: Proposition 3.3 Let A be an AFA with k states and t transitions and let",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 320,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "A \u2032 = \u27e8Q \u2032 , q \u2032 0 , \u2206 \u2032 , T \u2032 \u27e9 be a disjoint copy of A. If the partial derivatives \u2202W \u2202\u03bb i exist for each transition q W (\u03bb 1 ,\u03bb 2 ,...,\u03bb d ) \u2212\u2192 p in A, then B = \u27e8Q \u222a Q \u2032 , q 0 , \u2206 \u222a \u2206 \u2032 \u222a {q \u2202W \u2202\u03bb i \u2192 p \u2032 | q W \u2192 p \u2208 \u2206}, T \u2032 \u27e9 is an AFA with 2k states, 3t transi- tions and F B = \u2202F A \u2202\u03bb i . Sketch of proof.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "We have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "\u2202F A \u2202\u03bb i = \u2211 \u03c0 is a successful path in A \u2202l\u03c0 \u2202\u03bb i = \u2211 \u03c0 = q 0 W 1 \u2212\u2192 q 1 . . . q m\u22121 Wm \u2212\u2192 q m is a successful path in A \u2211 j \u03c0 (j,i) , where \u03c0 (j,i) = W 1 . . . W j\u22121 \u2202W j \u2202\u03bb i W j+1 . . . W m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "There is a one-to-one correspondence between the successful paths in B and the terms \u03c0 (j,i) in the above summation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
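Proposition 3.3 lends itself to a direct implementation. The sketch below (illustrative names; a single scalar parameter for brevity) builds B from a list of transitions annotated with their derivative labels, and evaluates it with the same topological dynamic-programming pass as in Subsection 3.1:

```python
import math

def eval_afa(transitions, start, final, lam):
    """Topological-order evaluation of an acyclic functional automaton
    with (source, label_fn, target) transitions."""
    out = {}
    for q, w, p in transitions:
        out.setdefault(q, []).append((w, p))
    order, seen = [], set()
    def visit(q):
        if q not in seen:
            seen.add(q)
            for _, p in out.get(q, []):
                visit(p)
            order.append(q)     # post-order: successors first
    visit(start)
    v = {final: 1.0}
    for q in order:
        if q != final:
            v[q] = sum(w(lam) * v.get(p, 0.0) for w, p in out.get(q, []))
    return v[start]

def derivative_afa(transitions, start, final):
    """transitions: list of (q, W, dW, p), where dW is the partial
    derivative of W with respect to the chosen parameter. Returns
    B = (copy of A) + (disjoint primed copy) + cross edges q --dW--> p'."""
    b = []
    for q, w, dw, p in transitions:
        b.append((("orig", q), w, ("orig", p)))    # copy of A
        b.append((("copy", q), w, ("copy", p)))    # disjoint copy A'
        b.append((("orig", q), dw, ("copy", p)))   # derivative cross edge
    return b, ("orig", start), ("copy", final)

# F(l) = l^2 * sin(l), so dF/dl = 2*l*sin(l) + l^2*cos(l).
a = [(0, lambda l: l ** 2, lambda l: 2 * l, 1),
     (1, math.sin, math.cos, 2)]
b, s, f = derivative_afa(a, 0, 2)
l = 0.7
assert abs(eval_afa(b, s, f, l)
           - (2 * l * math.sin(l) + l ** 2 * math.cos(l))) < 1e-12
```

Every successful path in B crosses from the original copy to the primed copy exactly once, picking up exactly one derivative factor, which is the path-level restatement of the product rule used in the sketch of proof.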
{
"text": "Let us note that the construction presented in Proposition 3.3 can be iterated i times in order to build a functional automaton with 2 i k states and 3 i t transitions for each i-th order partial derivate of F A . Thus we can build functional automata with 4k states and 9t transitions for \u2202 2 F A \u2202\u03bb i \u03bb j . This gives the possibility to use some second order gradient method in the training procedure. Note that if the computation of W (\u03bb) for a given \u03bb and all label functions, W , takes constant time, then using functional automata we achieve an O(t)-time computation of both \u2202F A \u2202\u03bb i (\u03bb) and \u2202 2 F A \u2202\u03bb i \u03bb j (\u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of partial derivates via AFA",
"sec_num": "3.2"
},
{
"text": "By given source sequence s we want to find best alignment s \u2192 t = argmax s\u2192t \u2032 \u2208As p\u03bb(s \u2192 t \u2032 ) = argmax s\u2192t \u2032 \u2208As e s\u2192t \u2032 (\u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search procedure",
"sec_num": "4"
},
{
"text": "For this purpose we use again a standard dynamic programming procedure on the automaton A s representing the function Z s (\u03bb), Figure 2 . The only difference with the procedure described in Subsection 3.1 is that instead of summation over all transtions from the current state we need to take maximum and to mark a transition that gives the maximum. Finally the successful path of marked transitions represents a best alignment. Actually this procedure corresponds to the backward version of the Viterbi decoding algorithm (Omura, 1967) . If the computation of W (\u03bb) by given \u03bb takes time O(1) for all label functions W , then the search procedure is linear in the number of the transitions in the functional automaton.",
"cite_spans": [
{
"start": 523,
"end": 536,
"text": "(Omura, 1967)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 127,
"end": 135,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Search procedure",
"sec_num": "4"
},
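The max-marking variant of the dynamic program can be sketched as follows; as before, the function name `best_path` and the (source, label_fn, target) encoding are illustrative assumptions, and the constant labels in the example stand in for the e_{s→t'} factors of a real model:

```python
def best_path(transitions, start, final, lam):
    """Viterbi-style search on an acyclic functional automaton:
    the topological pass of Subsection 3.1 with max in place of sum
    and a back-pointer marking the maximizing transition."""
    out = {}
    for q, w, p in transitions:
        out.setdefault(q, []).append((w, p))
    order, seen = [], set()
    def visit(q):
        if q not in seen:
            seen.add(q)
            for _, p in out.get(q, []):
                visit(p)
            order.append(q)     # post-order: successors first
    visit(start)
    score, back = {final: 1.0}, {}
    for q in order:
        if q == final or q not in out:
            continue
        # Mark the outgoing transition maximizing label * downstream score.
        w, p = max(out[q], key=lambda t: t[0](lam) * score.get(t[1], 0.0))
        score[q] = w(lam) * score.get(p, 0.0)
        back[q] = p
    # Read off the marked successful path from the start state.
    path, q = [start], start
    while q != final:
        q = back[q]
        path.append(q)
    return score[start], path

# Toy chain 0 -> 1 -> 2 with two candidate transitions per step
# (made-up constant weights standing in for candidate probabilities).
trans = [
    (0, lambda l: 0.3, 1),
    (0, lambda l: 0.6, 1),
    (1, lambda l: 0.9, 2),
    (1, lambda l: 0.1, 2),
]
sc, path = best_path(trans, 0, 2, None)
assert abs(sc - 0.54) < 1e-12      # 0.6 * 0.9
assert path == [0, 1, 2]
```

Since each transition is inspected exactly once, the search is linear in the number of transitions, matching the claim in the text.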
{
"text": "In this Section we generalize the constructions of the automaton A s from Section 3 and 4 to the case of an arbitrary n-gram language model, n > 1. In this case h lm (s \u2192 t) = log",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "\u220f |t| i=1 p lm (t i | t i\u2212n+1 t i\u2212n+2 . . . t i\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": ". We construct an automaton representing Z s (\u03bb) as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "First, we build an automaton A 1 that represents the function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "Z s (\u27e80, \u03bb cg \u27e9) = \u2211 s\u2192t \u2032 \u2208As exp[\u03bb cg h cg (s \u2192 t \u2032 )]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": ". Each transition in A 1 is associated with a target word, see Figure 2 . Now we would like to add exp[\u03bb lm log(p lm (t",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 71,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "i | t i\u2212n+1 t i\u2212n+2 . . . t i\u22121 ))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "to the label of each transition associated with t i . However, the problem is that there may be multiple sequences of preceding words t i\u2212n+1 t i\u2212n+2 . . . t i\u22121 for one and the same transition. For example, for n = 3 in Figure 2 , for the transition associated with t i = has from state 2 to state 3 there are three different possible pairs of preceding words t i\u22122 t i\u22121 : eldest son, eldest soon and eldest sun. We overcome this ambiguity by extending A 1 = \u27e8Q 1 , q 1 , \u2206 1 , T 1 \u27e9 to an equivalent automaton A 2 in which for each state the sequence of n \u2212 1 preceding words is uniquely determined. The set of states of A 2 is Q 2 = {\u27e8w 1 w 2 . . . w n\u22121 , q\u27e9 | q \u2208 Q 1 and w 1 w 2 . . . w n\u22121 is a sequence of preceding words for q in A 1 }. The set of transitions of A 2 is \u2206",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "2 = {\u27e8w 1 w 2 . . . w n\u22121 , q \u2032 \u27e9 W \u2192 \u27e8w 2 . . . w n\u22121 w n , q \u2032\u2032 \u27e9 | transition q \u2032 W \u2192 q \u2032\u2032 \u2208 \u2206 1 is associ- ated with w n }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "In A 2 the transition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "\u27e8w 1 w 2 . . . w n\u22121 , q \u2032 \u27e9 W \u2192 \u27e8w 2 . . . w n\u22121 w n , q \u2032\u2032 \u27e9 is associated with the word w n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "Finally, from A 2 we construct a functional automaton A 3 that represents Z s (\u27e8\u03bb lm , \u03bb cg \u27e9) by adding exp[\u03bb lm log(p lm (w n | w 1 w 2 . . . w n\u22121 ))] to the label of each transition t, where w n is the word associated with t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
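{
"text": "To make the expansion concrete, consider a hypothetical fragment with n = 3 and the candidates of Figure 2 . State 2 of A 1 can be reached with the word histories eldest son, eldest soon and eldest sun, so in A 2 it splits into \u27e8eldest son, 2\u27e9, \u27e8eldest soon, 2\u27e9 and \u27e8eldest sun, 2\u27e9. The single transition of A 1 from state 2 to state 3 associated with has accordingly splits into three transitions, e.g. \u27e8eldest son, 2\u27e9 \u2192 \u27e8son has, 3\u27e9, and in A 3 the label of each copy receives its own factor: exp[\u03bb lm log(p lm (has | eldest son))], exp[\u03bb lm log(p lm (has | eldest soon))] or exp[\u03bb lm log(p lm (has | eldest sun))].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},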
{
"text": "If m is an upper bound for the number of correction candidates for every sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "s i s i+1 . . . s i+k , then |Q 2 | \u2264 m n\u22121 |Q 1 | and |\u2206 2 | \u2264 m n\u22121 |\u2206 1 |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram language models",
"sec_num": "5"
},
{
"text": "In this section we apply the technique developed in Sections 3 and 4 to other statistical models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "Conditional random fields. A linear-chain CRF serves to assign a label y i to each observation x i of a given observation sequence x. We assume that the observations x i belong to a set X and the labels y i belong to a finite set Y . We shall further consider that the probability measure of a linear-chain CRF with |x| states is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "p \u03bb (y | x) = exp[ \u2211 |x| i=2 \u2211 K j=1 \u03b1 j f j (y i\u22121 , y i , x, i) + \u2211 |x| i=1 \u2211 K j=1 \u03b2 j g j (y i , x, i)] Z x (\u03bb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "|x| = |y|, f j : Y \u00d7 Y \u00d7 X * \u00d7 N \u2192 R and g j : Y \u00d7 X * \u00d7 N \u2192 R are predefined feature functions, \u03bb = \u27e8\u03b1 1 , \u03b1 2 , . . . , \u03b1 K , \u03b2 1 , \u03b2 2 , . . . , \u03b2 K \u27e9 are parameters and Z x (\u03bb) = \u2211 y\u2208Y |x| exp[ \u2211 |x| i=2 \u2211 K j=1 \u03b1 j f j (y i\u22121 , y i , x, i)+ \u2211 |x| i=1 \u2211 K j=1 \u03b2 j g j (y i , x, i)].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "The training task is similar to the one described in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "We have a training corpus of N pairs \u27e8x (1) , y (1) \u27e9, \u27e8x (2) , y (2) \u27e9, . . . , \u27e8x (N ) , y (N ) \u27e9 and we need to find the parameters \u03bb\u0302 = argmax \u03bb \u220f N n=1 p \u03bb (y (n) | x (n) ). Formulae very similar to (2) and (3) can be derived. Thus the main problem is again the computation of the term Z x (\u03bb). In (Lafferty et al., 2001 ) Z x (\u03bb) is represented as an entry of a special matrix which is obtained as a product of |x| + 1 matrices of size (|Y | + 2) \u00d7 (|Y | + 2). The states of an AFA A x representing Z x (\u03bb) are as follows: a start state s, a final state f and |x| \u2022 |Y | \"intermediate\" states",
"cite_spans": [
{
"start": 40,
"end": 43,
"text": "(1)",
"ref_id": null
},
{
"start": 58,
"end": 61,
"text": "(2)",
"ref_id": null
},
{
"start": 84,
"end": 88,
"text": "(N )",
"ref_id": null
},
{
"start": 171,
"end": 174,
"text": "(n)",
"ref_id": null
},
{
"start": 305,
"end": 327,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "q i,\u03b3 , 1 \u2264 i \u2264 |x|, \u03b3 \u2208 Y . The transitions are s G \u2192 q 1,\u03b3 for G = exp \u2211 K j=1 \u03b2 j g j (\u03b3, x, 1), q i,\u03b3 \u2032 F \u2192 q i+1,\u03b3 \u2032\u2032 for F = exp \u2211 K j=1 [\u03b1 j f j (\u03b3 \u2032 , \u03b3 \u2032\u2032 , x, i + 1)+ \u03b2 j g j (\u03b3 \u2032\u2032 , x, i + 1)] and q |x|,\u03b3 1 \u2192 f .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
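{
"text": "Evaluating A x with the generic dynamic programming of Subsection 3.1 recovers the usual forward algorithm for linear-chain CRFs (our sketch): val[q 1,\u03b3 ] = G for the label G of s \u2192 q 1,\u03b3 , val[q i+1,\u03b3 \u2032\u2032 ] = \u2211 \u03b3 \u2032 val[q i,\u03b3 \u2032 ] \u00b7 F for the labels F of the transitions q i,\u03b3 \u2032 \u2192 q i+1,\u03b3 \u2032\u2032 , and Z x (\u03bb) = \u2211 \u03b3 val[q |x|,\u03b3 ]. The partition function is thus computed in time O(|x| \u2022 |Y | 2 ), matching the matrix-product formulation of (Lafferty et al., 2001) , while Proposition 3.3 additionally yields functional automata for its partial derivatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},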
{
"text": "Transitions with label 0 can be removed from the automaton. If there are many such transitions this could significantly reduce the time for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "Hidden Markov models. We adopt the notation and definitions of (Rabiner, 1989) . Let \u03bb = \u27e8A, B, \u03c0\u27e9 be the parameters of an HMM with R states S = {S 1 , S 2 , . . . , S R } and M distinct observation symbols",
"cite_spans": [
{
"start": 70,
"end": 85,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "V = {v 1 , v 2 , . . . , v M }, where A = {a S i S j } is an R \u00d7 R matrix of transition probabilities, B = {b S j (v k )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "are the observation symbol probability distributions and \u03c0 = {\u03c0 S j } is the initial state distribution. The probability of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "O 1 O 2 . . . O T is p \u03bb (O 1 O 2 . . . O T ) = \u2211 q 1 q 2 ...q T \u2208S T c(q 1 q 2 . . . q T ), where c(q 1 q 2 . . . q T ) = \u03c0 q 1 b q 1 (O 1 )a q 1 q 2 b q 2 (O 2 ) . . . a q T \u22121 q T b q T (O T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "Given a training set of N observations O (1) , O (2) , . . . , O (N ) , the optimal parameters \u03bb\u0302 = argmax \u03bb \u220f N n=1 p \u03bb (O (n) ) have to be determined under the stochastic constraints",
"cite_spans": [
{
"start": 65,
"end": 69,
"text": "(N )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "\u2211 j a S i S j = 1, \u2211 k b S j (v k ) = 1 and \u2211 j \u03c0 S j = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "Applying the method of Lagrange multipliers we obtain a new function F (\u03bb, \u03b1, \u03b2, \u03b3) ",
"cite_spans": [
{
"start": 71,
"end": 83,
"text": "(\u03bb, \u03b1, \u03b2, \u03b3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "= \u220f N n=1 p \u03bb (O (n) )+ \u2211 i \u03b1 i [( \u2211 j a S i S j ) \u2212 1] + \u2211 i \u03b2 i [( \u2211 k b S j (v k )) \u2212 1]+ \u03b3[( \u2211 j \u03c0 S j ) \u2212 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "For each training observation sequence O (n) with T (n) symbols the function p \u03bb (O (n) ) can be represented by an AFA A O (n) with RT (n) + 2 states, R(T (n) + 1) transitions and a single final state as follows. We have the start state s, the final state f and RT (n) \"intermediate\" states q t,S i , 1 \u2264 t \u2264 T (n) , 1 \u2264 i \u2264 R. The union of two automata representing functions F 1 and F 2 gives an automaton for the function F 1 + F 2 . So using unions and concatenations we obtain one AFA (with a single final state) representing the function F (\u03bb, \u03b1, \u03b2, \u03b3) . We can directly construct functional automata for the partial derivatives of F (first order and, if needed, second order), see Proposition 3.3. Thus we can use a gradient method to find a local extremum of F .",
"cite_spans": [
{
"start": 265,
"end": 268,
"text": "(n)",
"ref_id": null
},
{
"start": 269,
"end": 272,
"text": "(n)",
"ref_id": null
},
{
"start": 494,
"end": 506,
"text": "(\u03bb, \u03b1, \u03b2, \u03b3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
{
"text": "The transitions are s \u03c0 S i b S i (O (n) 1 ) \u2212\u2192 q 1,S i , q t,S i a S i S j b S j (O (n) t+1 ) \u2212\u2192 q t+1,S j and q T (n) ,S i 1 \u2192 f . The concatenation of all N automata A O (n) gives one automaton representing \u220f N n=1 p \u03bb (O (n) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},
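{
"text": "Again, running the dynamic programming of Subsection 3.1 on A O (n) reproduces a familiar recursion (our sketch): with \u03b1 t (i) := val[q t,S i ] we get \u03b1 1 (i) = \u03c0 S i b S i (O (n) 1 ) and \u03b1 t+1 (j) = [\u2211 i \u03b1 t (i) a S i S j ] b S j (O (n) t+1 ), which is exactly the forward procedure of (Rabiner, 1989) , so p \u03bb (O (n) ) = \u2211 i \u03b1 T (n) (i) is obtained in time linear in the number of transitions of the automaton.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other statistical models",
"sec_num": "6"
},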
{
"text": "In this section we evaluate the quality of a noisy text normalization system that uses the log-linear model presented in Section 2. The system uses a globally convergent variant of the conjugate gradient method (Gilbert and Nocedal, 1992) . The computation of the gradient and of the values of the objective function is implemented with functional automata. We test the system on two tasks: (i) OCR post-correction of the TREC-5 Confusion Track corpus 1 and (ii) normalization of the 1641 Depositions 2 , a collection of highly non-standard 17th century documents in Early Modern English (Sweetnam, 2011) , digitized at Trinity College Dublin.",
"cite_spans": [
{
"start": 212,
"end": 239,
"text": "(Gilbert and Nocedal, 1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "7"
},
{
"text": "For task (i) we use a parallel corpus of 30000 training pairs (s, t), where s is a document produced by an OCR system and t is the corrected variant of s. The 30000 pairs were randomly selected from the TREC-5 corpus, which has about 5% error on the character level. We use 25000 pairs as a training set and the remaining 5000 pairs serve as a test set. With a heuristic dynamic programming algorithm we automatically converted all these 25000 pairs (s, t) into normalization alignments s \u2192 t, see Section 2. We use these alignments to train (a) a candidate generator and (b) a smoothed 2-gram language model, and to find (c) statistics for the length of the left side of a normalization pair and (d) statistics for normalization pairs with equal left and right sides. Our log-linear model has four feature functions induced by (a), (b), (c) and (d). As a candidate generator we use a variant of the algorithm presented in (Gerdjikov et al., 2013) . The word error rate (WER) between s and t in the test set of 5000 pairs is 22.10% and the BLEU (Papineni et al., 2002) is 58.44%. In Table 3 we compare the performance of our log-linear model with four feature functions against a baseline where we use only one feature function, which encodes the candidate generator. Table 3 shows that the combination of the four features reduces the WER by more than half. Precision and recall, obtained on the TREC-5 dataset, for different candidate generators can be found in (Gerdjikov et al., 2013) . To test our system on the task of normalization of the 1641 Depositions, we use a corpus of 500 manually created normalization alignments s \u2192 t, where s is a document in Early Modern English from the 1641 Depositions and t is the normalization of s in contemporary English. We train our system on 450 documents and test it on the other 50.",
"cite_spans": [
{
"start": 911,
"end": 935,
"text": "(Gerdjikov et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 1033,
"end": 1056,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF16"
},
{
"start": 1451,
"end": 1474,
"text": "Gerdjikov et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 1071,
"end": 1078,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1255,
"end": 1262,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "7"
},
{
"text": "We use five feature functions: (b), (c) and (d) as above and two language models: (a1) a 2-gram language model trained on part of the normalized training documents and (a2) another 2-gram language model trained on a large corpus of documents extracted from the entire Gutenberg English language corpus 3 . We obtain WER 5.37% and BLEU 89.34%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "7"
},
{
"text": "In this paper we considered a general framework for the realization of statistical models. We showed a novel construction proving that the class of functional automata is closed under taking partial derivatives. Thus functional automata yield efficient training and search procedures using only the usual sum and product operations on real numbers. We illustrated the power of this mechanism in the cases of CRF's, HMM's, LLM's and n-gram language models. Similar constructions can be applied for the realization of other methods, for example MERT (Och, 2003) .",
"cite_spans": [
{
"start": 554,
"end": 565,
"text": "(Och, 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We presented a noisy text normalization system based on functional automata and evaluated its quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "http://www.gutenberg.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research work reported in the paper is supported by the CULTURA project, grant 269973, funded by the FP7 Programme (STReP) and Project BG051PO001-3.3.06-0022/19.03.2012.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The mathematics of statistical machine translation: parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mer- cer, 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "F",
"middle": [],
"last": "Stanley",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th annual meeting on Association for Computational Linguistics, ACL '96",
"volume": "",
"issue": "",
"pages": "310--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen and Joshua Goodman. 1996. An em- pirical study of smoothing techniques for language modeling. Proceedings of the 34th annual meeting on Association for Computational Linguistics, ACL '96, 310-318.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generalized iterative scaling for log-linear models",
"authors": [
{
"first": "J",
"middle": [
"N"
],
"last": "Darroch",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ratcliff",
"suffix": ""
}
],
"year": 1972,
"venue": "The Annals of Mathematical Statistics",
"volume": "43",
"issue": "",
"pages": "1470--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. N. Darroch and D. Ratcliff. 1972. Generalized it- erative scaling for log-linear models. The Annals of Mathematical Statistics, 43:1470-1480.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parameter Estimation for Probabilistic Finite-State Transducers",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on Association for Computational Linguistics ACL '02",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2002. Parameter Estimation for Proba- bilistic Finite-State Transducers. Proceedings of the 40th annual meeting on Association for Computa- tional Linguistics ACL '02, 1-8.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Extraction of spelling variations from language structure for noisy text correction",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Gerdjikov",
"suffix": ""
},
{
"first": "Stoyan",
"middle": [],
"last": "Mihov",
"suffix": ""
},
{
"first": "Vladislav",
"middle": [],
"last": "Nenchev",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Document Analysis and Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Gerdjikov, Stoyan Mihov, and Vladislav Nenchev. 2013. Extraction of spelling varia- tions from language structure for noisy text correc- tion. Proceedings of the International Conference on Document Analysis and Recognition",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Global Convergence Properties of Conjugate Gradient Methods for Optimization",
"authors": [
{
"first": "Jean",
"middle": [
"Charles"
],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1992,
"venue": "SIAM Journal on Optimization",
"volume": "2",
"issue": "1",
"pages": "21--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Global Convergence Properties of Conjugate Gra- dient Methods for Optimization. SIAM Journal on Optimization, 2(1):21-42.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Introduction to Automata Theory, Languages, and Computation",
"authors": [
{
"first": "John",
"middle": [
"E"
],
"last": "Hopcroft",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John E. Hopcroft and Jeffrey D. Ullman. 1979. Intro- duction to Automata Theory, Languages, and Com- putation. Addison-Wesley Publishing Company.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hidden Markov Models for Speech Recognition",
"authors": [
{
"first": "B",
"middle": [
"H"
],
"last": "Juang",
"suffix": ""
},
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1991,
"venue": "Technometrics",
"volume": "33",
"issue": "3",
"pages": "251--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. H. Juang and L. R. Rabiner. 1991. Hidden Markov Models for Speech Recognition. Technometrics, 33(3):251-272.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. Proceed- ings of the 2003 Conference of the North American Chapter of the Association for Computational Lin- guistics on Human Language Technology -Volume 1, 48-54",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Label- ing Sequence Data. Proceedings of the Eigh- teenth International Conference on Machine Learn- ing, ICML '01, 282-289.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "First-and secondorder expectation semirings with applications to minimum-risk training on translation forests",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "40--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhifei Li and Jason Eisner 2009. First-and second- order expectation semirings with applications to minimum-risk training on translation forests. Pro- ceedings of the 2009 Conference on Empirical Meth- ods in Natural Language Processing: Volume 1 - Volume 1, EMNLP '09, 40-51.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using automated error profiling of texts for improved selection of correction candidates for garbled tokens",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mihov",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mitankin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gotscharek",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Reffle",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "K",
"middle": [
"U"
],
"last": "Ringlstetter",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "456--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Mihov, P. Mitankin, A. Gotscharek, U. Reffle, C. Schulz, and K. U. Ringlstetter. 2007. Using au- tomated error profiling of texts for improved selec- tion of correction candidates for garbled tokens. AI 2007: Advances in Artificial Intelligence, 456-465.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Finite-state transducers in language and speech processing",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "2",
"pages": "269--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri. 1997. Finite-state transducers in lan- guage and speech processing. Computational Lin- guistics, 23(2): 269-311.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics -Volume 1, 160-167.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On the Viterbi decoding algorithm",
"authors": [
{
"first": "J",
"middle": [],
"last": "Omura",
"suffix": ""
}
],
"year": 1967,
"venue": "IEEE Transactions on Information Theory",
"volume": "13",
"issue": "2",
"pages": "260--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Omura. 1967. On the Viterbi decoding algo- rithm. IEEE Transactions on Information Theory, 13(2):260-269.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BLEU: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, 311-318.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A tutorial on HMM and selected applications in speech recognition",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Rabiner. 1989. A tutorial on HMM and se- lected applications in speech recognition. Proceed- ings of the IEEE, 77(2):257-286.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficiently generating correction suggestions for garbled tokens of historical language",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Reffle",
"suffix": ""
}
],
"year": 2011,
"venue": "Natural Language Engineering",
"volume": "17",
"issue": "02",
"pages": "265--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Reffle. 2011. Efficiently generating correc- tion suggestions for garbled tokens of historical lan- guage. Natural Language Engineering, 17(02):265- 282.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Fast selection of small and precise candidate sets from dictionaries for text correction tasks",
"authors": [
{
"first": "K",
"middle": [
"U"
],
"last": "Schulz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mihov",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mitankin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International Conference on Document Analysis and Recognition",
"volume": "",
"issue": "",
"pages": "471--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. U. Schulz, S. Mihov, and P. Mitankin, 2007. Fast selection of small and precise candidate sets from dictionaries for text correction tasks, Proceedings of the International Conference on Document Analysis and Recognition 471-475.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Shallow parsing with conditional random fields",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "134--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Sha and Fernando Pereira, 2003. Shallow pars- ing with conditional random fields, Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology -Volume 1, 134-141.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Natural language processing and early-modern dirty data: applying IBM Languageware to the 1641 depositions. Literary and Linguistic Computing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "Barbara",
"middle": [
"A"
],
"last": "Sweetnam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fennell",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "27",
"issue": "",
"pages": "39--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark S. Sweetnam and Barbara A. Fennell. 2011. Natural language processing and early-modern dirty data: applying IBM Languageware to the 1641 depositions. Literary and Linguistic Computing, 27(1):39-54",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "The functional automaton A theldest sonn hath bin kild is obtained by removing the words from the transition labels.",
"uris": null
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>source word</td><td>set of target candidates</td></tr><tr><td>theldest</td><td>{\u27e8the eldest, 0.75\u27e9, \u27e8eldest, 0.25\u27e9}</td></tr><tr><td>sonn</td><td>{\u27e8son, 0.</td></tr></table>",
"html": null,
"type_str": "table",
"text": "92593\u27e9, \u27e8soon, 0.03704\u27e9, \u27e8sun, 0.03704\u27e9} hath {\u27e8hat, 0.0088\u27e9, \u27e8hats, 0.0044\u27e9, \u27e8has, 0.9868\u27e9} bin {\u27e8bin, 0.1\u27e9, \u27e8been, 0.8\u27e9, \u27e8bean, 0.1\u27e9} kild {\u27e8kid, 0.01\u27e9, \u27e8killed, 0.99\u27e9}"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>: Source words and their corresponding set</td></tr><tr><td>of candidates provided by the candidate generator.</td></tr><tr><td>Each target candidate c for the source word s i is</td></tr><tr><td>associated with a probability p cg (c | s i ).</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF2": {
"num": null,
"content": "<table><tr><td/><td/><td>0</td></tr><tr><td>6</td><td/><td/></tr><tr><td colspan=\"2\">cg log(1)]</td><td>1</td></tr><tr><td>son/</td><td/><td>soon/</td><td>sun/</td></tr><tr><td>exp[\u03bb lm log(0.0003)</td><td colspan=\"2\">exp[\u03bb lm log(0.0005)</td><td>exp[\u03bb lm log(0.0002)</td></tr><tr><td>+\u03bb cg log(0.92593)]</td><td colspan=\"2\">+\u03bb cg log(0.03704)]</td><td>+\u03bb cg log(0.03704)]</td></tr><tr><td/><td/><td>2</td></tr><tr><td>hat/</td><td/><td>hats/</td><td>has/</td></tr><tr><td>exp[\u03bb lm log(0.0001)</td><td colspan=\"3\">exp[\u03bb lm log(0.00002)</td><td>exp[\u03bb lm log(0.002)</td></tr><tr><td>+\u03bb cg log(0.0088)]</td><td colspan=\"2\">+\u03bb cg log(0.0044)]</td><td>+\u03bb cg log(0.9868)]</td></tr><tr><td/><td/><td>3</td></tr><tr><td>bin/</td><td/><td>been/</td><td>bean/</td></tr><tr><td>exp[\u03bb lm log(0.000005)</td><td colspan=\"2\">exp[\u03bb lm log(0.003)</td><td>exp[\u03bb lm log(0.000005)</td></tr><tr><td>+\u03bb cg log(0.1)]</td><td colspan=\"2\">+\u03bb cg log(0.8)]</td><td>+\u03bb cg log(0.1)]</td></tr><tr><td/><td/><td>4</td></tr><tr><td>kid/</td><td/><td/><td>killed/</td></tr><tr><td colspan=\"2\">exp[\u03bb lm log(0.00002)</td><td colspan=\"2\">exp[\u03bb lm log(0.0001)</td></tr><tr><td colspan=\"2\">+\u03bb cg log(0.01)]</td><td colspan=\"2\">+\u03bb cg log(0.99)]</td></tr><tr><td/><td/><td>5</td></tr></table>",
"html": null,
"type_str": "table",
"text": "parameterized with \u03bb lm and \u03bb cg . Specifically, each path from state i \u2212 1 to state i, 1 \u2264 i \u2264 |s|, corresponds to a target candi-the/ exp[\u03bb lm log(0.017) +\u03bb cg log(0.75)] eldest/ exp[\u03bb lm log(0.00002) +\u03bb cg log(0.25)] eldest/ exp[\u03bb lm log(0.00002) +\u03bb"
},
"TABREF3": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "http://trec.nist.gov/pubs/trec5/t5 proceedings.html 2 http://1641.tcd.ie Log-linear model WER BLEU only candidate generator 6.81% 85.24% candidate generator + language model 3.27% 92.82% + other features"
},
"TABREF4": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Only candidate generator vs. candidate generator + other features. OCR-postcorrection of the TREC-5 corpus."
}
}
}
}