| { |
| "paper_id": "P15-1021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:09:52.227860Z" |
| }, |
| "title": "Efficient Top-Down BTG Parsing for Machine Translation Preordering", |
| "authors": [ |
| { |
| "first": "Tetsuji", |
| "middle": [], |
| "last": "Nakagawa", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Google Japan Inc", |
| "location": {} |
| }, |
| "email": "tnaka@google.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present an efficient incremental topdown parsing method for preordering based on Bracketing Transduction Grammar (BTG). The BTG-based preordering framework (Neubig et al., 2012) can be applied to any language using only parallel text, but has the problem of computational efficiency. Our top-down parsing algorithm allows us to use the early update technique easily for the latent variable structured Perceptron algorithm with beam search, and solves the problem. Experimental results showed that the topdown method is more than 10 times faster than a method using the CYK algorithm. A phrase-based machine translation system with the top-down method had statistically significantly higher BLEU scores for 7 language pairs without relying on supervised syntactic parsers, compared to baseline systems using existing preordering methods.", |
| "pdf_parse": { |
| "paper_id": "P15-1021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present an efficient incremental topdown parsing method for preordering based on Bracketing Transduction Grammar (BTG). The BTG-based preordering framework (Neubig et al., 2012) can be applied to any language using only parallel text, but has the problem of computational efficiency. Our top-down parsing algorithm allows us to use the early update technique easily for the latent variable structured Perceptron algorithm with beam search, and solves the problem. Experimental results showed that the topdown method is more than 10 times faster than a method using the CYK algorithm. A phrase-based machine translation system with the top-down method had statistically significantly higher BLEU scores for 7 language pairs without relying on supervised syntactic parsers, compared to baseline systems using existing preordering methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The difference of the word order between source and target languages is one of major problems in phrase-based statistical machine translation. In order to cope with the issue, many approaches have been studied. Distortion models consider word reordering in decoding time using such as distance (Koehn et al., 2003) and lexical information (Tillman, 2004) . Another direction is to use more complex translation models such as hierarchical models (Chiang, 2007) . However, these approaches suffer from the long-distance reordering issue and computational complexity.", |
| "cite_spans": [ |
| { |
| "start": 294, |
| "end": 314, |
| "text": "(Koehn et al., 2003)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 339, |
| "end": 354, |
| "text": "(Tillman, 2004)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 445, |
| "end": 459, |
| "text": "(Chiang, 2007)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Preordering (reordering-as-preprocessing) (Xia and McCord, 2004; Collins et al., 2005) is another approach for tackling the problem, which modifies the word order of an input sentence in a source language to have the word order in a target language (Figure 1(a) ).", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 64, |
| "text": "(Xia and McCord, 2004;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 65, |
| "end": 86, |
| "text": "Collins et al., 2005)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 249, |
| "end": 261, |
| "text": "(Figure 1(a)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Various methods for preordering have been studied, and a method based on Bracketing Transduction Grammar (BTG) was proposed by Neubig et al. (2012) . It reorders source sentences by handling sentence structures as latent variables. The method can be applied to any language using only parallel text. However, the method has the problem of computational efficiency.", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 147, |
| "text": "Neubig et al. (2012)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we propose an efficient incremental top-down BTG parsing method which can be applied to preordering. Model parameters can be learned using latent variable Perceptron with the early update technique (Collins and Roark, 2004) , since the parsing method provides an easy way for checking the reachability of each parser state to valid final states. We also try to use forced-decoding instead of word alignment based on Expectation Maximization (EM) algorithms in order to create better training data for preordering. In experiments, preordering using the topdown parsing algorithm was faster and gave higher BLEU scores than BTG-based preordering using the CYK algorithm. Compared to existing preordering methods, our method had better or comparable BLEU scores without using supervised parsers.", |
| "cite_spans": [ |
| { |
| "start": 213, |
| "end": 238, |
| "text": "(Collins and Roark, 2004)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many preordering methods which use syntactic parse trees have been proposed, because syntactic information is useful for determining the word order in a target language, and it can be used to restrict the search space against all the possible permutations. Preordering methods using manually created rules on parse trees have been studied (Collins et al., 2005; Xu et al., 2009) , but linguistic knowledge for a language pair is necessary to create such rules. Preordering methods which automatically create reordering rules or utilize statistical classifiers have also been studied (Xia and McCord, 2004; Li et al., 2007; Genzel, 2010; Visweswariah et al., 2010; Miceli Barone and Attardi, 2013; Lerner and Petrov, 2013; Jehl et al., 2014) . These methods rely on source-side parse trees and cannot be applied to languages where no syntactic parsers are available.", |
| "cite_spans": [ |
| { |
| "start": 339, |
| "end": 361, |
| "text": "(Collins et al., 2005;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 362, |
| "end": 378, |
| "text": "Xu et al., 2009)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 583, |
| "end": 605, |
| "text": "(Xia and McCord, 2004;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 606, |
| "end": 622, |
| "text": "Li et al., 2007;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 623, |
| "end": 636, |
| "text": "Genzel, 2010;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 637, |
| "end": 663, |
| "text": "Visweswariah et al., 2010;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 664, |
| "end": 696, |
| "text": "Miceli Barone and Attardi, 2013;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 697, |
| "end": 721, |
| "text": "Lerner and Petrov, 2013;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 722, |
| "end": 740, |
| "text": "Jehl et al., 2014)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preordering for Machine Translation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "There are preordering methods that do not need parse trees. They are usually trained only on automatically word-aligned parallel text. It is possible to mine parallel text from the Web (Uszkoreit et al., 2010; Antonova and Misyurev, 2011) , and the preordering systems can be trained without manually annotated language resources. Tromble and Eisner (2009) studied preordering based on a Linear Ordering Problem by defining a pairwise preference matrix. Khalilov and Sima'an (2010) proposed a method which swaps adjacent two words using a maximum entropy model. Visweswariah et al. (2011) regarded the preordering problem as a Traveling Salesman Problem (TSP) and applied TSP solvers for obtaining reordered words. These methods do not consider sentence structures.", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 209, |
| "text": "(Uszkoreit et al., 2010;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 210, |
| "end": 238, |
| "text": "Antonova and Misyurev, 2011)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 331, |
| "end": 356, |
| "text": "Tromble and Eisner (2009)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 454, |
| "end": 481, |
| "text": "Khalilov and Sima'an (2010)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 562, |
| "end": 588, |
| "text": "Visweswariah et al. (2011)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preordering for Machine Translation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "DeNero and Uszkoreit (2011) presented a preordering method which builds a monolingual parsing model and a tree reordering model from parallel text. Neubig et al. (2012) proposed to train a discriminative BTG parser for preordering directly from word-aligned parallel text by handling underlying parse trees with latent variables. This method is explained in detail in the next subsection. These two methods can use sentence structures for designing feature functions to score permutations. ", |
| "cite_spans": [ |
| { |
| "start": 148, |
| "end": 168, |
| "text": "Neubig et al. (2012)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preordering for Machine Translation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Neubig et al. 2012proposed a BTG-based preordering method. Bracketing Transduction Grammar (BTG) (Wu, 1997) is a binary synchronous context-free grammar with only one non-terminal symbol, and has three types of rules ( Figure 2 ): Straight which keeps the order of child nodes, Inverted which reverses the order, and Terminal which generates a terminal symbol. 1 BTG can express word reordering. For example, the word reordering in Figure 1 (a) can be represented with the BTG parse tree in Figure 1 (b). 2 Therefore, the task to reorder an input source sentence can be solved as a BTG parsing task to find an appropriate BTG tree.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 107, |
| "text": "(Wu, 1997)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 361, |
| "end": 362, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 219, |
| "end": 227, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 432, |
| "end": 440, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 491, |
| "end": 499, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "BTG-based Preordering", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In order to find the best BTG tree among all the possible ones, a score function is defined. Let \u03a6(m) denote the vector of feature functions for the BTG tree node m, and \u039b denote the vector of feature weights. Then, for a given source sentence x, the best BTG tree\u1e91 and the reordered sentence x \u2032 can be obtained as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BTG-based Preordering", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "z = argmax z\u2208Z(x) \u2211 m\u2208N odes(z) \u039b \u2022 \u03a6(m), (1) x \u2032 = P roj(\u1e91),", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "BTG-based Preordering", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where Z(x) is the set of all the possible BTG trees for x, N odes(z) is the set of all the nodes in the tree z, and P roj(z) is the function which generates a reordered sentence from the BTG tree z.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BTG-based Preordering", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The method was shown to improve translation performance. However, it has a problem of processing speed. The CYK algorithm, whose computational complexity is O(n 3 ) for a sen-1 Although Terminal produces a pair of source and target words in the original BTG (Wu, 1997) , the target-side words are ignored here because both the input and the output of preordering systems are in the source language. In (Wu, 1997) , (DeNero and Uszkoreit, 2011) and (Neubig et al., 2012) , Terminal can produce multiple words. Here, we produce only one word.", |
| "cite_spans": [ |
| { |
| "start": 258, |
| "end": 268, |
| "text": "(Wu, 1997)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 402, |
| "end": 412, |
| "text": "(Wu, 1997)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 415, |
| "end": 443, |
| "text": "(DeNero and Uszkoreit, 2011)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 448, |
| "end": 469, |
| "text": "(Neubig et al., 2012)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BTG-based Preordering", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "2 There may be more than one BTG tree which represents the same word reordering (e.g., the word reordering C3B2A1 to A1B2C3 has two possible BTG trees), and there are permutations which cannot be represented with BTG (e.g., B2D4A1C3 to A1B2C3D4, which is called the 2413 pattern). tence of length n, is used to find the best parse tree. Furthermore, due to the use of a complex loss function, the complexity at training time is O(n 5 ) (Neubig et al., 2012) . Since the computational cost is prohibitive, some techniques like cube pruning and cube growing have been applied (Neubig et al., 2012; Na and Lee, 2013) . In this study, we propose a top-down parsing algorithm in order to achieve fast BTG-based preordering.", |
| "cite_spans": [ |
| { |
| "start": 436, |
| "end": 457, |
| "text": "(Neubig et al., 2012)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 574, |
| "end": 595, |
| "text": "(Neubig et al., 2012;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 596, |
| "end": 613, |
| "text": "Na and Lee, 2013)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BTG-based Preordering", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "0) \u27e8[[0, 5)], [], 0\u27e9 (1) \u27e8[[0, 2), [2, 5)], [(2, S)], v1\u27e9 (2) \u27e8[[0, 2), [3, 5)], [(2, S), (3, I)], v2\u27e9 (3) \u27e8[[0, 2)], [(2, S), (3, I), (4, I)], v3\u27e9 (4) \u27e8[], [(2, S), (3, I), (4, I), (1, S)], v4\u27e9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BTG-based Preordering", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Top-Down BTG Parsing", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preordering with Incremental", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We explain an incremental top-down BTG parsing algorithm using Figure 3 , which illustrates how a parse tree is built for the example sentence in Figure 1. At the beginning, a tree (span) which covers all the words in the sentence is considered. Then, a span which covers more than one word is split in each step, and the node type (Straight or Inverted) for the splitting point is determined. The algorithm terminates after (n \u2212 1) iterations for a sentence with n words, because there are (n \u2212 1) positions which can be split. We consider that the incremental parser has a parser state in each step, and define the state as a triple \u27e8P, C, v\u27e9. P is a stack of unresolved spans. A span denoted by [p, q) covers the words", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 63, |
| "end": 71, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 146, |
| "end": 152, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "x p \u2022 \u2022 \u2022 x q\u22121 for an input word sequence x = x 0 \u2022 \u2022 \u2022 x |x|\u22121 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "C is a list of past parser actions. A parser action denoted by (r, o) represents the action to split a span at the position between x r\u22121 and x r with the node type o \u2208 {S, I}, where S and I indicate Straight and Inverted respectively. v is the score of the state, which is the sum of the Input: Sentence x, feature weights \u039b, beam width k. Output: BTG parse tree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "1:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "S0 \u2190 {\u27e8[[0, |x|)], [], 0\u27e9 } // Initial state. 2: for i := 1, \u2022 \u2022 \u2022 , |x| \u2212 1 do 3: S \u2190 {} // Set of the next states. 4: foreach s \u2208 Si\u22121 do 5: S \u2190 S \u222a \u03c4x,\u039b(s) // Generate next states. 6: Si \u2190 T op k (S) // Select k-best states. 7:\u015d = argmax s\u2208S |x|\u22121 Score(s) 8: return T ree(\u015d) 9: function \u03c4x,\u039b(\u27e8P, C, v\u27e9) 10: [p, q) \u2190 P.pop() 11: S \u2190 {} 12: for r := p + 1, \u2022 \u2022 \u2022 , q do 13: P \u2032 \u2190 P 14:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "if r \u2212 p > 1 then 15:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "P \u2032 .push([p, r)) 16:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "if q \u2212 r > 1 then 17: scores for the nodes constructed so far. Parsing starts with the initial state \u27e8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "P \u2032 .push([r, q)) 18: v S \u2190 v + \u039b \u2022 \u03a6(x, C, p, q, r, S) 19: v I \u2190 v + \u039b \u2022 \u03a6(x, C, p, q, r, I) 20: C S \u2190 C; C S .append((r, S)) 21: C I \u2190 C; C I .append((r, I)) 22: S \u2190 S \u222a {\u27e8P \u2032 , C S , v S \u27e9, \u27e8P \u2032 , C I , v I \u27e9} 23: return S", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "[[0, |x|)], [], 0\u27e9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": ", because there is one span covering all the words at the beginning. In each step, a span is popped from the top of the stack, and a splitting point in the span and its node type are determined. The new spans generated by the split are pushed onto the stack if their lengths are greater than 1, and the action is added to the list. On termination, the parser has the final state", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u27e8[], [c 0 , \u2022 \u2022 \u2022 , c |x|\u22122 ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": ", v\u27e9, because the stack is empty and there are (|x| \u2212 1) actions in total. The parse tree can be obtained from the list of actions. Table 1 shows the parser state for each step in Figure 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 132, |
| "end": 139, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 180, |
| "end": 188, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The top-down parsing method can be used with beam search as shown in Figure 4 . \u03c4 x,\u039b (s) is a function which returns the set of all the possible next states for the state s. T op k (S) returns the top k states from S in terms of their scores, Score(s) returns the score of the state s, and T ree(s) returns the BTG parse tree constructed from s. \u03a6(x, C, p, q, r, o) is the feature vector for the node created by splitting the span [p, q) at r with the node type o, and is explained in Section 3.3.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 69, |
| "end": 77, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parsing Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Model parameters \u039b are estimated from training examples. We assume that each training example consists of a sentence x and its word order in a target language y = y 0 \u2022 \u2022 \u2022 y |x|\u22121 , where y i is the position of x i in the target language. For example, the example sentence in Figure 1 (a) will have y = 0, 1, 4, 3, 2. y can have ambiguities. Multiple words can be reordered to the same position on the target side. The words whose target positions are unknown are indicated by position \u22121, and we consider such words can appear at any position. 3 For example, the word alignment in Figure 5 gives the target side word positions y = \u22121, 2, 1, 0, 0.", |
| "cite_spans": [ |
| { |
| "start": 546, |
| "end": 547, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 277, |
| "end": 285, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 583, |
| "end": 591, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Statistical syntactic parsers are usually trained on tree-annotated corpora. However, corpora annotated with BTG parse trees are unavailable, and only the gold standard permutation y is available. Neubig et al. (2012) proposed to train BTG parsers for preordering by regarding BTG trees behind word reordering as latent variables, and we use latent variable Perceptron (Sun et al., 2009) together with beam search. In latent variable Perceptron, among the examples whose latent variables are compatible with a gold standard label, the one with the highest score is picked up as a positive example. Such an approach was used for parsing with multiple correct actions (Goldberg and Elhadad, 2010; Sartorio et al., 2013) . Figure 6 describes the training algorithm. 4 \u03a6(x, s) is the feature vector for all the nodes in the partial parse tree at the state s, and \u03c4 x,\u039b,y (s) is the set of all the next states for the state s. The algorithm adopts the early update technique (Collins and Roark, 2004) which terminates incremental parsing if a correct state falls off the beam, and there is no possibility to obtain a correct output. Huang et al. (2012) proposed the violationfixing Perceptron framework which is guaranteed to converge even if inexact search is used, and also showed that early update is a special case of the framework. We define that a parser state is valid if the state can reach a final state whose BTG parse tree is compatible with y. Since this is a latent variable setting in which multiple states can reach correct final states, early update occurs when all the valid states fall off the beam (Ma et al., 2013; Yu et al., 2013) . In order to use early update, we need to check the validity of each parser state. We extend the parser state to the four tuple \u27e8P, A, v, w\u27e9, where w \u2208 {true, false} is the validity of the state. 
We remove training examples which cannot be represented with BTG beforehand and set w of the initial state to true. The function V alid(s) in Figure 6 returns the validity of state s. One advantage of the top-down parsing algorithm is that it is easy to track the validity of each state. The validity of a state can be calculated using the following property, and we can implement the function \u03c4 x,\u039b,y (s) by modifying the function \u03c4 x,\u039b (s) in Figure 4 .", |
| "cite_spans": [ |
| { |
| "start": 197, |
| "end": 217, |
| "text": "Neubig et al. (2012)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 369, |
| "end": 387, |
| "text": "(Sun et al., 2009)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 666, |
| "end": 694, |
| "text": "(Goldberg and Elhadad, 2010;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 695, |
| "end": 717, |
| "text": "Sartorio et al., 2013)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 970, |
| "end": 995, |
| "text": "(Collins and Roark, 2004)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1128, |
| "end": 1147, |
| "text": "Huang et al. (2012)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1612, |
| "end": 1629, |
| "text": "(Ma et al., 2013;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1630, |
| "end": 1646, |
| "text": "Yu et al., 2013)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 720, |
| "end": 728, |
| "text": "Figure 6", |
| "ref_id": "FIGREF6" |
| }, |
| { |
| "start": 1986, |
| "end": 1994, |
| "text": "Figure 6", |
| "ref_id": "FIGREF6" |
| }, |
| { |
| "start": 2289, |
| "end": 2297, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Lemma 1. When a valid state s, which has [p, q) in the top of the stack, transitions to a state s \u2032 by the action (r, o), s \u2032 is also valid if and only if the following condition holds:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2200i \u2208 {p, \u2022 \u2022 \u2022 , r \u2212 1} y i = \u22121 \u2228 \u2200i \u2208 {r, \u2022 \u2022 \u2022 , q \u2212 1} y i = \u22121 \u2228 ( o = S \u2227 max i=p,\u2022\u2022\u2022 ,r\u22121 y i \u0338 =\u22121 y i \u2264 min i=r,\u2022\u2022\u2022 ,q\u22121 y i \u0338 =\u22121 y i ) \u2228 ( o = I \u2227 max i=r,\u2022\u2022\u2022 ,q\u22121 y i \u0338 =\u22121 y i \u2264 min i=p,\u2022\u2022\u2022 ,r\u22121 y i \u0338 =\u22121 y i ) .", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Learning Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Proof. Let \u03c0 i denote the position of x i after reordering by BTG parsing. If Condition (3) does not hold, there are i and j which satisfy \u03c0 i < \u03c0 j \u2227 y i > y j \u2227 y i \u0338 = \u22121 \u2227 y j \u0338 = \u22121, and \u03c0 i and \u03c0 j are not compatible with y. Therefore, s \u2032 is valid only if Condition (3) holds. When Condition (3) holds, a valid permutation can be obtained if the spans [p, r) and [r, q) are BTG-parsable. They are BTG-parsable as shown below. Let us assume that y does not have ambiguities. The class of the permutations which can be represented by BTG is known as separable permutations in combinatorics. It can be proven (Bose et al., 1998 ) that a permutation is a separable permutation if and only if it contains neither the 2413 nor the 3142 patterns. Since s is valid, y is a separable permutation. y does not contain the 2413 nor the 3142 patterns, and any subsequence of y also does not contain the patterns. Thus, [p, r) and [r, q) are separable permutations. The above argument holds even if y has ambiguities (duplicated positions or unaligned words). In such a case, we can always make a word order y \u2032 which specializes y and has no ambiguities (e.g., y \u2032 = 2, 1.0, 0.0, 0.1, 1.1 for y = \u22121, 1, 0, 0, 1), because s is valid, and there is at least one BTG parse tree which licenses y. Any subsequence in For dependency parsing and constituent parsing, incremental bottom-up parsing methods have been studied (Yamada and Matsumoto, 2003; Nivre, 2004; Goldberg and Elhadad, 2010; Sagae and Lavie, 2005) . Our top-down approach is contrastive to the bottom-up approaches. In the bottom-up approaches, spans which cover individual words are considered at the beginning, then they are merged into larger spans in each step, and a span which covers all the words is obtained at the end. 
In the top-down approach, a span which covers all the words is considered at the beginning, then spans are split into smaller spans in each step, and spans which cover individual words are obtained at the end. The top-down BTG parsing method has the advantage that the validity of parser states can be easily tracked.", |
| "cite_spans": [ |
| { |
| "start": 613, |
| "end": 631, |
| "text": "(Bose et al., 1998", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1410, |
| "end": 1438, |
| "text": "(Yamada and Matsumoto, 2003;", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 1439, |
| "end": 1451, |
| "text": "Nivre, 2004;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 1452, |
| "end": 1479, |
| "text": "Goldberg and Elhadad, 2010;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1480, |
| "end": 1502, |
| "text": "Sagae and Lavie, 2005)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The computational complexity of the top-down parsing algorithm is O(kn 2 ) for sentence length n and beam width k, because in Line 5 of Figure 4 , which is repeated at most k(n \u2212 1) times, at most 2(n \u2212 1) parser states are generated, and their scores are calculated. The learning algorithm uses the same decoding algorithm as in the parsing phase, and has the same time complexity. Note that the validity of a parser state can be calculated in O(1) by pre-calculating", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 136, |
| "end": 144, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "min i=p,\u2022\u2022\u2022 ,r\u2227y i \u0338 =\u22121 y i , max i=p,\u2022\u2022\u2022 ,r\u2227y i \u0338 =\u22121 y i , min i=r,\u2022\u2022\u2022 ,q\u22121\u2227y i \u0338 =\u22121 y i , and max i=r,\u2022\u2022\u2022 ,q\u22121\u2227y i \u0338 =\u22121 y i for all r for the span [p, q)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "when it is popped from the stack.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Algorithm", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We assume that each word x i in a sentence has three attributes: word surface form x w i , part-ofspeech (POS) tag x p i and word class x c i (Section 4.1 explains how x p i and x c i are obtained). Table 2 lists the features generated for the node which is created by splitting the span [p, q) with the action (r, o). o' is the node type of the parent node, d \u2208 {left, right} indicates whether this node is the left-hand-side or the right-hand-side child of the parent node, and Balance(p, q, r) re-", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 199, |
| "end": 206, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Input: Training data {\u27e8x_l, y_l\u27e9}_{l=0}^{L-1}, number of iterations T, beam width k. Output: Feature weights \u039b. 1: \u039b \u2190 0 2: for t := 0, \u22ef, T \u2212 1 do 3: for l := 0, \u22ef, L \u2212 1 do 4: S_0 \u2190 {\u27e8[[0, |x_l|)], [], 0, true\u27e9} 5: for i := 1, \u22ef, |x_l| \u2212 1 do 6: S \u2190 {} 7: foreach s \u2208 S_{i\u22121} do 8: S \u2190 S \u222a \u03c4_{x_l,\u039b,y_l}(s) 9: S_i \u2190 Top_k(S) 10: \u015d \u2190 argmax_{s \u2208 S} Score(s) 11: s* \u2190 argmax_{s \u2208 S \u2227 Valid(s)} Score(s) 12: if s* \u2209 S_i then 13: break // Early update. 14: if \u015d \u2260 s* then 15: \u039b \u2190 \u039b + \u03a6(x_l, s*) \u2212 \u03a6(x_l, \u015d) 16: return \u039b", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The baseline feature templates in Table 2 are from Neubig et al. (2012), and the additional feature templates are extended features that we introduce in this study. The top-down parser is fast, and allows us to use a larger number of features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3" |
| }, |
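The learning loop above can be sketched in Python. This is a hedged sketch, not the paper's implementation: a plain Perceptron update is shown (the authors actually use the Passive-Aggressive algorithm with parameter averaging), and `expand`, `valid_fn`, and `phi` are caller-supplied stand-ins for the transition function tau, the validity check Valid, and the feature map Phi.

```python
# Hedged sketch of the learning algorithm above, with a plain Perceptron
# update (the paper actually uses Passive-Aggressive with averaging).
# `expand`, `valid_fn`, and `phi` are caller-supplied stand-ins for the
# transition function tau, the validity check Valid, and the feature map Phi.
# Assumes at least one valid candidate state exists at every step.

def train(data, expand, valid_fn, phi, T=5, k=4):
    w = {}                                     # feature weights Lambda

    def score(x, s):
        return sum(w.get(f, 0.0) * v for f, v in phi(x, s).items())

    for _ in range(T):
        for x, y in data:
            beam = [()]                        # S_0: initial parser state
            best = gold = ()
            for _ in range(len(x) - 1):        # n - 1 splitting actions
                cands = [t for s in beam for t in expand(x, s)]
                cands.sort(key=lambda s: score(x, s), reverse=True)
                beam = cands[:k]               # Top_k(S)
                best = cands[0]                # s-hat: best overall state
                valid = [s for s in cands if valid_fn(y, s)]
                gold = max(valid, key=lambda s: score(x, s))   # s*
                if gold not in beam:           # gold fell off the beam:
                    break                      # early update
            if best != gold:                   # Perceptron update toward s*
                for f, v in phi(x, gold).items():
                    w[f] = w.get(f, 0.0) + v
                for f, v in phi(x, best).items():
                    w[f] = w.get(f, 0.0) - v
    return w
```

The update after the inner loop covers both the early-update case (the last computed best/gold pair) and the full-decode case.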
| { |
| "text": "In order to make the feature generation efficient, the attributes of all the words are converted to their 64-bit hash values beforehand, and concatenating the attributes is executed not as string manipulation but as faster integer calculation to generate a hash value by merging two hash values. The hash values are used as feature names. Therefore, when accessing feature weights stored in a hash table using the feature names as keys, the keys can be used as their hash values. This technique is different from the hashing trick (Ganchev and Dredze, 2008) which directly uses hash values as indices, and no noticeable differences in accuracy were observed by using this technique.", |
| "cite_spans": [ |
| { |
| "start": 531, |
| "end": 557, |
| "text": "(Ganchev and Dredze, 2008)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.3" |
| }, |
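The hash-merging idea can be sketched as follows; the 64-bit FNV-1a constants and the multiplicative mixing constant are choices of this sketch, not the ones used in the paper.

```python
# Sketch of the feature hashing described above: attributes are hashed to
# 64 bits once (FNV-1a here), and feature conjunctions are formed by merging
# two hashes with integer arithmetic instead of string concatenation. The
# mixing constants are this sketch's choices, not the paper's.

MASK = (1 << 64) - 1

def hash_attr(s):
    """64-bit FNV-1a hash of an attribute string (e.g. a surface form)."""
    h = 0xCBF29CE484222325
    for b in s.encode("utf-8"):
        h = ((h ^ b) * 0x100000001B3) & MASK
    return h

def merge(h1, h2):
    """Combine two 64-bit hashes into the hash of their conjunction."""
    return ((h1 * 0x9E3779B97F4A7C15) ^ h2) & MASK
```

Feature weights can then live in a hash table keyed by these integers, so a key serves as its own hash value when the table is probed, as described above.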
| { |
| "text": "As described in Section 3.2, each training example has y which represents correct word positions after reordering. However, only word alignment data is generally available, and we need to convert it to y. Let A i denote the set of indices of the targetside words which are aligned to the source-side word x i . We define an order relation between two words: Then, we sort x using the order relation and assign the position of x i in the sorted result to y i . If there are two words x i and x j in x which satisfy neither", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "x i \u2264 x j \u21d4 \u2200a \u2208 A i \\ A j , \u2200b \u2208 A j a \u2264 b \u2227 \u2200a \u2208 A i , \u2200b \u2208 A j \\ A i a \u2264 b. (4) Baseline Feature Template o(q \u2212 p), oBalance(p, q, r), ox w p\u22121 , ox w p , ox w r\u22121 , ox w r , ox w q\u22121 , ox w q , ox w p x w q\u22121 , ox w r\u22121 x w r , ox p p\u22121 , ox p p , ox p r\u22121 , ox p r , ox p q\u22121 , ox p q , ox p p x p q\u22121 , ox p r\u22121 x p r , ox c p\u22121 , ox c p , ox c r\u22121 , ox c r , ox c q\u22121 , ox c q , ox c p x c q\u22121 , ox c r\u22121 x c r . Additional Feature Template o min(r \u2212 p, 5) min(q \u2212 r, 5), oo \u2032 , oo \u2032 d, ox w p\u22121 x w p , ox w p x w r\u22121 , ox w p x w r , ox w r\u22121 x w q\u22121 , ox w r x w q\u22121 , ox w q\u22121 x w q , ox w r\u22122 x w r\u22121 x w r , ox w p x w r\u22121 x w r , ox w r\u22121 x w r x w q\u22121 , ox w r\u22121 x w r x w r+1 , ox w p x w r\u22121 x w r x w q\u22121 , oo \u2032 dx w p , oo \u2032 dx w r\u22121 , oo \u2032 dx w r , oo \u2032 dx w q\u22121 , oo \u2032 dx w p x w q\u22121 , ox p p\u22121 x p p , ox p p x p r\u22121 , ox p p x p r , ox p r\u22121 x p q\u22121 , ox p r x p q\u22121 , ox p q\u22121 x p q , ox p r\u22122 x p r\u22121 x p r , ox p p x p r\u22121 x p r , ox p r\u22121 x p r x p q\u22121 , ox p r\u22121 x p r x p r+1 , ox p p x p r\u22121 x p r x p q\u22121 , oo \u2032 dx p p , oo \u2032 dx p r\u22121 , oo \u2032 dx p r , oo \u2032 dx p q\u22121 , oo \u2032 dx p p x p q\u22121 , ox c p\u22121 x c p , ox c p x c r\u22121 , ox c p x c r , ox c r\u22121 x c q\u22121 , ox c r x c q\u22121 , ox c q\u22121 x c q , ox c r\u22122 x c r\u22121 x c r , ox c p x c r\u22121 x c r , ox c r\u22121 x c r x c q\u22121 , ox c r\u22121 x c r x c r+1 , ox c p x c r\u22121 x c r x c q\u22121 , oo \u2032 dx c p , oo \u2032 dx c r\u22121 , oo \u2032 dx c r , oo \u2032 dx c q\u22121 , oo \u2032 dx c p x c q\u22121 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "x i \u2264 x j nor x j \u2264 x i (that is,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "x does not make a totally ordered set with the order relation), then x cannot be sorted, and the example is removed from the training data. \u22121 is assigned to the words which do not have aligned target words. Two words x i and x j are regarded to have the same position if", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "x i \u2264 x j and x j \u2264 x i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
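The conversion from word alignments to y described above can be sketched as below, assuming `A[i]` is the (possibly empty) set of target indices aligned to source word i; `reorder_labels` is a hypothetical name, not the paper's code.

```python
# Sketch of the procedure above: sort source words by the partial order of
# Equation (4) and read off target-side positions y. A[i] is the set of
# target indices aligned to source word i (empty set = unaligned).
from functools import cmp_to_key

def reorder_labels(A):
    aligned = [i for i in range(len(A)) if A[i]]

    def leq(i, j):  # the order relation of Equation (4)
        return (all(a <= b for a in A[i] - A[j] for b in A[j]) and
                all(a <= b for a in A[i] for b in A[j] - A[i]))

    def cmp(i, j):
        le, ge = leq(i, j), leq(j, i)
        if le and ge:
            return 0               # same position after reordering
        if le:
            return -1
        if ge:
            return 1
        raise ValueError("not totally ordered; drop this example")

    order = sorted(aligned, key=cmp_to_key(cmp))
    y, pos = [-1] * len(A), 0      # unaligned words keep y_i = -1
    for k, i in enumerate(order):
        if k > 0 and cmp(order[k - 1], i) != 0:
            pos += 1               # tied words share the same position
        y[i] = pos
    return y
```

A sentence for which `cmp` raises is exactly the case described above where x is not totally ordered and the example is discarded.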
| { |
| "text": "The quality of training data is important to make accurate preordering systems, but automatically word-aligned data by EM algorithms tend to have many wrong alignments. We use forceddecoding in order to make training data for preordering. Given a parallel sentence pair and a phrase table, forced-decoding tries to translate the source sentence to the target sentence, and produces phrase alignments. We train the parameters for forced-decoding using the same parallel data used for training the final translation system. Infrequent phrase translations are pruned when the phrase table is created, and forced-decoding does not always succeed for the parallel sentences in the training data. Forced-decoding tends to succeed for shorter sentences, and the phrase-alignment data obtained by forced-decoding is biased to contain more shorter sentences. Therefore, we apply the following processing for the output of forceddecoding to make training data for preordering:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "1. Remove sentences which contain less than 3 or more than 50 words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "2. Remove sentences which contain less than 3 phrase alignments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "3. Remove sentences if they contain word 5grams which appear in other sentences in order to drop boilerplates. 4. Lastly, randomly resample sentences from the pool of filtered sentences to make the distribution of the sentence lengths follow a normal distribution with the mean of 20 and the standard deviation of 8. The parameters were determined from randomly sampled sentences from the Web.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data for Preordering", |
| "sec_num": "3.4" |
| }, |
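Resampling step 4 above can be sketched as follows; `resample` is an illustrative re-implementation that repeatedly draws a target length from N(20, 8) and picks a sentence of that length, not the paper's pipeline code.

```python
# Illustrative sketch of filtering step 4: resample sentences so that
# sentence lengths roughly follow a normal distribution N(20, 8).
# `resample` is a hypothetical helper, not the paper's pipeline.
import random

def resample(sentences, n, mean=20.0, stddev=8.0, seed=0):
    rng = random.Random(seed)
    pool = {}
    for s in sentences:                       # bucket sentences by length
        pool.setdefault(len(s.split()), []).append(s)
    out = []
    while len(out) < n:
        target = int(round(rng.gauss(mean, stddev)))  # draw a target length
        if target in pool:                    # retry if no sentence has it
            out.append(rng.choice(pool[target]))
    return out
```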
| { |
| "text": "We conduct experiments for 12 language pairs: Dutch (nl)-English (en), en-nl, en-French (fr), en-Japanese (ja), en-Spanish (es), fr-en, Hindi (hi)-en, ja-en, Korean (ko)-en, Turkish (tr)-en, Urdu (ur)en and Welsh (cy)-en. We use a phrase-based statistical machine translation system which is similar to (Och and Ney, 2004) . The decoder adopts the regular distance distortion model, and also incorporates a maximum entropy based lexicalized phrase reordering model (Zens and Ney, 2006) . The distortion limit is set to 5 words. Word alignments are learned using 3 iterations of IBM Model-1 (Brown et al., 1993) and 3 iterations of the HMM alignment model (Vogel et al., 1996) . Lattice-based minimum error rate training (MERT) (Macherey et al., 2008) is applied to optimize feature weights. 5gram language models trained on sentences collected from various sources are used.", |
| "cite_spans": [ |
| { |
| "start": 303, |
| "end": 322, |
| "text": "(Och and Ney, 2004)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 465, |
| "end": 485, |
| "text": "(Zens and Ney, 2006)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 582, |
| "end": 610, |
| "text": "Model-1 (Brown et al., 1993)", |
| "ref_id": null |
| }, |
| { |
| "start": 655, |
| "end": 675, |
| "text": "(Vogel et al., 1996)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 727, |
| "end": 750, |
| "text": "(Macherey et al., 2008)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The translation system is trained with parallel sentences automatically collected from the Web. The parallel data for each language pair consists of around 400 million source and target words. In order to make the development data for MERT and test data (3,000 and 5,000 sentences respectively for each language), we created parallel sentences by randomly collecting English sentences from the Web, and translating them by humans into each language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As an evaluation metric for translation quality, BLEU (Papineni et al., 2002) is used. As intrinsic evaluation metrics for preordering, Fuzzy Reordering Score (FRS) (Talbot et al., 2011 ) and Kendall's \u03c4 (Kendall, 1938; Birch et al., 2010; Isozaki et al., 2010) are used. Let \u03c1 i denote the position in the input sentence of the (i+1)-th token in a preordered word sequence excluding unaligned words in the gold-standard evaluation data. For Table 4 : Performance of preordering for various training data. Bold BLEU scores indicate no statistically significant difference at p < 0.05 from the best system (Koehn, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 77, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 165, |
| "end": 185, |
| "text": "(Talbot et al., 2011", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 192, |
| "end": 219, |
| "text": "Kendall's \u03c4 (Kendall, 1938;", |
| "ref_id": null |
| }, |
| { |
| "start": 220, |
| "end": 239, |
| "text": "Birch et al., 2010;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 240, |
| "end": 261, |
| "text": "Isozaki et al., 2010)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 605, |
| "end": 618, |
| "text": "(Koehn, 2004)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 442, |
| "end": 449, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "example, the preordering result \"New York I to went\" for the gold-standard data in Figure 5 has \u03c1 = 3, 4, 2, 1. Then FRS and \u03c4 are calculated as follows:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 83, |
| "end": 91, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "F RS = B |\u03c1| + 1 ,", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "B = |\u03c1|\u22122 \u2211 i=0 \u03b4(y \u03c1 i =y \u03c1 i+1 \u2228 y \u03c1 i +1=y \u03c1 i+1 ) + \u03b4(y \u03c1 0 =0) + \u03b4(y \u03c1 |\u03c1|\u22121 = max i y i ), (6) \u03c4 = \u2211 |\u03c1|\u22122 i=0 \u2211 |\u03c1|\u22121 j=i+1 \u03b4(y \u03c1 i \u2264 y \u03c1 j ) 1 2 |\u03c1|(|\u03c1| \u2212 1) ,", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where \u03b4(X) is the Kronecker's delta function which returns 1 if X is true or 0 otherwise. These scores are calculated for each sentence, and are averaged over all sentences in test data. As above, FRS can be calculated as the precision of word bigrams (B is the number of the word bigrams which exist both in the system output and the gold standard data). This formulation is equivalent to the original formulation based on chunk fragmentation by Talbot et al. (2011) . Equation (6) takes into account the positions of the beginning and the ending words (Neubig et al., 2012 ). Kendall's \u03c4 is equivalent to the (normalized) crossing alignment link score used by Genzel (2010) . We prepared three types of training data for learning model parameters of BTG-based preordering:", |
| "cite_spans": [ |
| { |
| "start": 447, |
| "end": 467, |
| "text": "Talbot et al. (2011)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 554, |
| "end": 574, |
| "text": "(Neubig et al., 2012", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 662, |
| "end": 675, |
| "text": "Genzel (2010)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
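Equations (5)-(7) can be computed directly from rho and y. The sketch below assumes `rho` lists the input-sentence positions of the preordered tokens (unaligned words excluded) and `y` holds the gold target-side positions, with y_i = -1 for unaligned words; `frs_and_tau` is a hypothetical name.

```python
# Sketch of Equations (5)-(7): FRS as word-bigram precision and Kendall's
# tau over the permutation rho (positions of preordered tokens, unaligned
# words excluded), with y the gold target-side positions (-1 = unaligned).

def frs_and_tau(rho, y):
    n = len(rho)
    # B (Eq. 6): bigrams consistent with the gold order, plus matches at the
    # beginning and the end of the sentence.
    b = sum(1 for i in range(n - 1)
            if y[rho[i]] == y[rho[i + 1]] or y[rho[i]] + 1 == y[rho[i + 1]])
    b += (y[rho[0]] == 0) + (y[rho[-1]] == max(v for v in y if v != -1))
    frs = b / (n + 1)                                   # Eq. 5
    # Eq. 7: fraction of token pairs in gold (non-crossing) order.
    concordant = sum(1 for i in range(n - 1) for j in range(i + 1, n)
                     if y[rho[i]] <= y[rho[j]])
    tau = concordant / (n * (n - 1) / 2)
    return frs, tau
```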
| { |
| "text": "Manual-8k Manually word-aligned 8,000 sen-tence pairs. EM-10k, EM-100k These are the data obtained with the EM-based word alignment learning. From the word alignment result for phrase translation extraction described above, 10,000 and 100,000 sentence pairs were randomly sampled. Before the sampling, the data filtering procedure 1 and 3 in Section 3.4 were applied, and also sentences were removed if more than half of source words do not have aligned target words. Word alignment was obtained by symmetrizing source-to-target and target-tosource word alignment with the INTERSEC-TION heuristic. 5 Forced-10k, Forced-100k These are 10,000 and 100,000 word-aligned sentence pairs obtained with forced-decoding as described in Section 3.4.", |
| "cite_spans": [ |
| { |
| "start": 598, |
| "end": 599, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As test data for intrinsic evaluation of preordering, we manually word-aligned 2,000 sentence pairs for en-ja and ja-en. Several preordering systems were prepared in order to compare the following six systems:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "No-Preordering This is a system without preordering. Manual-Rules This system uses the preordering method based on manually created rules (Xu et al., 2009) . We made 43 precedence rules for en-ja, and 24 for ja-en. Auto-Rules This system uses the rule-based preordering method which automatically learns the rules from word-aligned data using the Variant 1 learning algorithm described in (Genzel, 2010) . 27 to 36 rules were automatically learned for each language pair. Classifier This system uses the preordering method based on statistical classifiers (Lerner and Petrov, 2013) , and the 2-step algorithm was implemented. Lader This system uses Latent Derivation Reorderer (Neubig et al., 2012) , which is a BTG-based preordering system using the CYK algorithm. 6 The basic feature templates in Table 2 are used as features. Top-Down This system uses the preordering system described in Section 3.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 155, |
| "text": "et al., 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 389, |
| "end": 403, |
| "text": "(Genzel, 2010)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 556, |
| "end": 581, |
| "text": "(Lerner and Petrov, 2013)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 677, |
| "end": 698, |
| "text": "(Neubig et al., 2012)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 766, |
| "end": 767, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 799, |
| "end": 806, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Among the six systems, Manual-Rules, Auto-Rules and Classifier need dependency parsers for source languages. A dependency parser based on the shift-reduce algorithm with beam search (Zhang and Nivre, 2011) is used. The dependency parser and all the preordering systems need POS taggers. A supervised POS tagger based on conditional random fields (Lafferty et al., 2001 ) trained with manually POS annotated data is used for nl, en, fr, ja and ko. For other languages, we use a POS tagger based on POS projection (T\u00e4ckstr\u00f6m 6 lader 0.1.4. http://www.phontron.com/lader/ et al., 2013) which does not need POS annotated data. Word classes in Table 2 are obtained by using Brown clusters (Koo et al., 2008 ) (the number of classes is set to 256). For both Lader and Top-Down, the beam width is set to 20, and the number of training iterations of online learning is set to 20. The CPU time shown in this paper is measured using Intel Xeon 3.20GHz with 32GB RAM. Table 3 shows the training time and preordering speed together with the intrinsic evaluation metrics. In this experiment, both Top-Down and Lader were trained using the EM-100k data. Compared to Lader, Top-Down was faster: more than 20 times in training, and more than 10 times in preordering. Top-down had higher preordering accuracy in FRS and \u03c4 for en-ja. Although Lader uses sophisticated loss functions, Top-Down uses a larger number of features.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 205, |
| "text": "(Zhang and Nivre, 2011)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 346, |
| "end": 368, |
| "text": "(Lafferty et al., 2001", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 512, |
| "end": 524, |
| "text": "(T\u00e4ckstr\u00f6m 6", |
| "ref_id": null |
| }, |
| { |
| "start": 684, |
| "end": 701, |
| "text": "(Koo et al., 2008", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 639, |
| "end": 646, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 957, |
| "end": 964, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments 4.1 Experimental Settings", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Top-Down (Basic feats.) is the top-down method using only the basic feature templates in Table 2 . It was much faster but less accurate than Top-Down using the additional features. Top-Down (Basic feats.) and Lader use exactly the same features. However, there are differences in the two systems, and they had different accuracies. Top-Down uses the beam search-based top-down method for parsing and the Passive-Aggressive algorithm for parameter estimation, and Lader uses the CYK algorithm with cube pruning and an on-line SVM algorithm. Especially, Lader optimizes FRS in the default setting, and it may be the reason that Lader had higher FRS.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 89, |
| "end": 96, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training and Preordering Speed", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Various Training Data Table 4 shows the preordering accuracy and BLEU scores when Top-Down was trained with various data. The best BLEU score for Top-Down was obtained by using manually annotated data for enja and 100k forced-decoding data for ja-en. The performance was improved by increasing the data size. Table 5 shows the BLEU score of each system for 12 language pairs. Some blank fields mean that the results are unavailable due to the lack of rules or dependency parsers. For all the language pairs, Top-Down had higher BLEU scores than Lader. For ja-en and ur-en, using Forced-100k instead of EM-100k for Top-Down improved the BLEU scores by more than 0.6, but it did not always improved.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 22, |
| "end": 29, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 309, |
| "end": 316, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance of Preordering for", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Manual-Rules performed the best for en-ja, but it needs manually created rules and is difficult to be applied to many language pairs. Auto-Rules and Classifier had higher scores than No-Preordering except for fr-en, but cannot be applied to the languages with no available dependency parsers. Top-Down (Forced-100k) can be applied to any language, and had statistically significantly better BLEU scores than No-Preordering, Manual-Rules, Auto-Rules, Classifier and Lader for 7 language pairs (en-fr, fr-en, hi-en, ja-en, ko-en, tr-en and ur-en), and similar performance for other language pairs except for en-ja, without dependency parsers trained with manually annotated data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "End-to-End Evaluation for Various Language Pairs", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "In all the experiments so far, the decoder was allowed to reorder even after preordering was carried out. In order to see the performance without reordering after preordering, we conducted experiments by setting the distortion limit to 0. Table 6 shows the results. The effect of the distortion limits varies for language pairs and preordering methods. The BLEU scores of Top-Down were not affected largely even when relying only on preordering.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 239, |
| "end": 246, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "End-to-End Evaluation for Various Language Pairs", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "In this paper, we proposed a top-down BTG parsing method for preordering. The method incrementally builds parse trees by splitting larger spans into smaller ones. The method provides an easy way to check the validity of each parser state, which allows us to use early update for latent variable Perceptron with beam search. In the experiments, it was shown that the top-down parsing method is more than 10 times faster than a CYKbased method. The top-down method had better BLEU scores for 7 language pairs without relying on supervised syntactic parsers compared to other preordering methods. Future work includes developing a bottom-up BTG parser with latent variables, and comparing the results to the top-down parser.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In(Neubig et al., 2012), the positions of such words were fixed by heuristics. In this study, the positions are not fixed, and all the possibilities are considered by latent variables.4 Although the simple Perceptron algorithm is used for explanation, we actually used the Passive Aggressive algorithm(Crammer et al., 2006) with the parameter averaging technique(Freund and Schapire, 1999).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In our preliminary experiments, the UNION and GROW-DIAG-FINAL heuristics were also applied to generate the training data for preordering, but INTERSECTION performed the best.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Building a Web-Based Parallel Corpus and Filtering Out Machine-Translated Text", |
| "authors": [ |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Antonova", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexey", |
| "middle": [], |
| "last": "Misyurev", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web", |
| "volume": "", |
| "issue": "", |
| "pages": "136--144", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexandra Antonova and Alexey Misyurev. 2011. Building a Web-Based Parallel Corpus and Filtering Out Machine-Translated Text. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 136-144.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Metrics for MT Evaluation: Evaluating Reordering. Machine Translation", |
| "authors": [ |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Miles", |
| "middle": [], |
| "last": "Osborne", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "24", |
| "issue": "", |
| "pages": "15--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexandra Birch, Miles Osborne, and Phil Blunsom. 2010. Metrics for MT Evaluation: Evaluating Re- ordering. Machine Translation, 24(1):15-26.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Pattern matching for permutations. Information Processing Letters", |
| "authors": [ |
| { |
| "first": "Prosenjit", |
| "middle": [], |
| "last": "Bose", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [ |
| "F" |
| ], |
| "last": "Buss", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Lubiw", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "65", |
| "issue": "", |
| "pages": "277--283", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Prosenjit Bose, Jonathan F. Buss, and Anna Lubiw. 1998. Pattern matching for permutations. Informa- tion Processing Letters, 65(5):277-283.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "Della" |
| ], |
| "last": "Vincent", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "A" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "L" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "19", |
| "issue": "", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263-311.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Hierarchical Phrase-Based Translation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "2", |
| "pages": "201--228", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Chiang. 2007. Hierarchical Phrase-Based Translation. Computational Linguistics, 33(2):201- 228.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Incremental Parsing with the Perceptron Algorithm", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Roark", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "111--118", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Brian Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceed- ings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 111-118.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Clause Restructuring for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivona", |
| "middle": [], |
| "last": "Kucerova", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "531--540", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause Restructuring for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Lin- guistics, pages 531-540.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Online Passive-Aggressive Algorithms", |
| "authors": [ |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ofer", |
| "middle": [], |
| "last": "Dekel", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Keshet", |
| "suffix": "" |
| }, |
| { |
| "first": "Shai", |
| "middle": [], |
| "last": "Shalev-Shwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "7", |
| "issue": "", |
| "pages": "551--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. On- line Passive-Aggressive Algorithms. Journal of Ma- chine Learning Research, 7:551-585.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Inducing Sentence Structure from Parallel Corpora for Reordering", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Denero", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "193--203", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John DeNero and Jakob Uszkoreit. 2011. Inducing Sentence Structure from Parallel Corpora for Re- ordering. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Process- ing, pages 193-203.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Large Margin Classification Using the Perceptron Algorithm", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Freund", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schapire", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Machine Learning", |
| "volume": "37", |
| "issue": "3", |
| "pages": "277--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Freund and Robert E. Schapire. 1999. Large Margin Classification Using the Perceptron Algo- rithm. Machine Learning, 37(3):277-296.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Small Statistical Models by Random Feature Mixing", |
| "authors": [ |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the ACL-08: HLT Workshop on Mobile Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "19--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kuzman Ganchev and Mark Dredze. 2008. Small Sta- tistical Models by Random Feature Mixing. In Pro- ceedings of the ACL-08: HLT Workshop on Mobile Language Processing, pages 19-20.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Automatically Learning Source-side Reordering Rules for Large Scale Machine Translation", |
| "authors": [ |
| { |
| "first": "Dmitriy", |
| "middle": [], |
| "last": "Genzel", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "376--384", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dmitriy Genzel. 2010. Automatically Learning Source-side Reordering Rules for Large Scale Ma- chine Translation. In Proceedings of the 23rd Inter- national Conference on Computational Linguistics, pages 376-384.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "An Efficient Algorithm for Easy-first Non-directional Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "742--750", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg and Michael Elhadad. 2010. An Ef- ficient Algorithm for Easy-first Non-directional De- pendency Parsing. In Human Language Technolo- gies: The 2010 Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 742-750.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Structured Perceptron with Inexact Search", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Suphan", |
| "middle": [], |
| "last": "Fayong", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "142--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured Perceptron with Inexact Search. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-151.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Automatic Evaluation of Translation Quality for Distant Language Pairs", |
| "authors": [ |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Tsutomu", |
| "middle": [], |
| "last": "Hirao", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Duh", |
| "suffix": "" |
| }, |
| { |
| "first": "Katsuhito", |
| "middle": [], |
| "last": "Sudoh", |
| "suffix": "" |
| }, |
| { |
| "first": "Hajime", |
| "middle": [], |
| "last": "Tsukada", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "944--952", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic Evaluation of Translation Quality for Distant Lan- guage Pairs. In Proceedings of the 2010 Confer- ence on Empirical Methods in Natural Language Processing, pages 944-952.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Source-side Preordering for Translation using Logistic Regression and Depthfirst Branch-and-Bound Search", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Jehl", |
| "suffix": "" |
| }, |
| { |
| "first": "Adri\u00e0", |
| "middle": [], |
| "last": "De Gispert", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Hopkins", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Byrne", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "239--248", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Jehl, Adri\u00e0 de Gispert, Mark Hopkins, and Bill Byrne. 2014. Source-side Preordering for Translation using Logistic Regression and Depth- first Branch-and-Bound Search. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 239-248.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "A New Measure of Rank Correlation", |
| "authors": [ |
| { |
| "first": "Maurice", |
| "middle": [ |
| "G" |
| ], |
| "last": "Kendall", |
| "suffix": "" |
| } |
| ], |
| "year": 1938, |
| "venue": "Biometrika", |
| "volume": "30", |
| "issue": "1/2", |
| "pages": "81--93", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maurice G. Kendall. 1938. A New Measure of Rank Correlation. Biometrika, 30(1/2):81-93.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Source reordering using MaxEnt classifiers and supertags", |
| "authors": [ |
| { |
| "first": "Maxim", |
| "middle": [], |
| "last": "Khalilov", |
| "suffix": "" |
| }, |
| { |
| "first": "Khalil", |
| "middle": [], |
| "last": "Sima'an", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 14th Annual Conference of the European Association for Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maxim Khalilov and Khalil Sima'an. 2010. Source reordering using MaxEnt classifiers and supertags. In Proceedings of the 14th Annual Conference of the European Association for Machine Translation.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Statistical Phrase-Based Translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "48--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Pro- ceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 48-54.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Statistical Significance Tests for Machine Translation Evaluation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "388--395", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Nat- ural Language Processing, pages 388-395.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Simple Semi-supervised Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "595--603", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-supervised Dependency Pars- ing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 595-603.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 18th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Prob- abilistic Models for Segmenting and Labeling Se- quence Data. In Proceedings of the 18th Interna- tional Conference on Machine Learning, pages 282- 289.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Source-Side Classifier Preordering for Machine Translation", |
| "authors": [ |
| { |
| "first": "Uri", |
| "middle": [], |
| "last": "Lerner", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "513--523", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Uri Lerner and Slav Petrov. 2013. Source-Side Clas- sifier Preordering for Machine Translation. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 513- 523.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "A Probabilistic Approach to Syntax-based Reordering for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Chi-Ho", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Minghui", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongdong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mu", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Guan", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "720--727", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chi-Ho Li, Minghui Li, Dongdong Zhang, Mu Li, Ming Zhou, and Yi Guan. 2007. A Probabilistic Approach to Syntax-based Reordering for Statisti- cal Machine Translation. In Proceedings of the 45th", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Annual Meeting of the Association of Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "720--727", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Meeting of the Association of Computational Linguistics, pages 720-727.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Easy-First POS Tagging and Dependency Parsing with Beam Search", |
| "authors": [ |
| { |
| "first": "Ji", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingbo", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tong", |
| "middle": [], |
| "last": "Xiao", |
| "suffix": "" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "110--114", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ji Ma, Jingbo Zhu, Tong Xiao, and Nan Yang. 2013. Easy-First POS Tagging and Dependency Parsing with Beam Search. In Proceedings of the 51st An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 110- 114.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Lattice-based Minimum Error Rate Training for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Ignacio", |
| "middle": [], |
| "last": "Thayer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "725--734", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wolfgang Macherey, Franz Och, Ignacio Thayer, and Jakob Uszkoreit. 2008. Lattice-based Minimum Error Rate Training for Statistical Machine Trans- lation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Process- ing, pages 725-734.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Pre-Reordering for Machine Translation Using Transition-Based Walks on Dependency Parse Trees", |
| "authors": [ |
| { |
| "first": "Valerio", |
| "middle": [ |
| "Antonio" |
| ], |
| "last": "Miceli Barone", |
| "suffix": "" |
| }, |
| { |
| "first": "Giuseppe", |
| "middle": [], |
| "last": "Attardi", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 8th Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "164--169", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valerio Antonio Miceli Barone and Giuseppe Attardi. 2013. Pre-Reordering for Machine Translation Us- ing Transition-Based Walks on Dependency Parse Trees. In Proceedings of the 8th Workshop on Sta- tistical Machine Translation, pages 164-169.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "A Discriminative Reordering Parser for IWSLT 2013", |
| "authors": [ |
| { |
| "first": "Hwidong", |
| "middle": [], |
| "last": "Na", |
| "suffix": "" |
| }, |
| { |
| "first": "Jong-Hyeok", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 10th International Workshop for Spoken Language Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "83--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hwidong Na and Jong-Hyeok Lee. 2013. A Dis- criminative Reordering Parser for IWSLT 2013. In Proceedings of the 10th International Workshop for Spoken Language Translation, pages 83-86.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Inducing a Discriminative Parser to Optimize Machine Translation Reordering", |
| "authors": [ |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| }, |
| { |
| "first": "Taro", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| }, |
| { |
| "first": "Shinsuke", |
| "middle": [], |
| "last": "Mori", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "843--853", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graham Neubig, Taro Watanabe, and Shinsuke Mori. 2012. Inducing a Discriminative Parser to Optimize Machine Translation Reordering. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 843-853.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Incrementality in Deterministic Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together", |
| "volume": "", |
| "issue": "", |
| "pages": "50--57", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre. 2004. Incrementality in Deterministic Dependency Parsing. In Proceedings of the Work- shop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 50-57.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "The Alignment Template Approach to Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Linguistics", |
| "volume": "30", |
| "issue": "4", |
| "pages": "417--449", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hermann Ney. 2004. The Align- ment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4):417- 449.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "BLEU: A Method for Automatic Evaluation of Machine Translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, pages 311-318.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A Classifier-Based Parser with Linear Run-Time Complexity", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 9th International Workshop on Parsing Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "125--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenji Sagae and Alon Lavie. 2005. A Classifier-Based Parser with Linear Run-Time Complexity. In Pro- ceedings of the 9th International Workshop on Pars- ing Technology, pages 125-132.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy", |
| "authors": [ |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Sartorio", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "135--144", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francesco Sartorio, Giorgio Satta, and Joakim Nivre. 2013. A Transition-Based Dependency Parser Us- ing a Dynamic Parsing Strategy. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics, pages 135-144.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Latent Variable Perceptron Algorithm for Structured Classification", |
| "authors": [ |
| { |
| "first": "Xu", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Takuya", |
| "middle": [], |
| "last": "Matsuzaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Daisuke", |
| "middle": [], |
| "last": "Okanohara", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 21st International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1236--1242", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xu Sun, Takuya Matsuzaki, Daisuke Okanohara, and Jun'ichi Tsujii. 2009. Latent Variable Perceptron Algorithm for Structured Classification. In Proceed- ings of the 21st International Joint Conference on Artificial Intelligence, pages 1236-1242.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Token and Type Constraints for Cross-Lingual Part-of-Speech Tagging", |
| "authors": [ |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association of Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and Type Constraints for Cross-Lingual Part-of-Speech Tagging. Transactions of the Association of Compu- tational Linguistics, 1:1-12.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "A Lightweight Evaluation Framework for Machine Translation Reordering", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Talbot", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideto", |
| "middle": [], |
| "last": "Kazawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Ichikawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Katz-Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Masakazu", |
| "middle": [], |
| "last": "Seno", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 6th Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "12--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Talbot, Hideto Kazawa, Hiroshi Ichikawa, Ja- son Katz-Brown, Masakazu Seno, and Franz J. Och. 2011. A Lightweight Evaluation Framework for Machine Translation Reordering. In Proceedings of the 6th Workshop on Statistical Machine Trans- lation, pages 12-21.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "A Unigram Orientation Model for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Christoph", |
| "middle": [], |
| "last": "Tillman", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (Short Papers)", |
| "volume": "", |
| "issue": "", |
| "pages": "101--104", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christoph Tillman. 2004. A Unigram Orientation Model for Statistical Machine Translation. In Pro- ceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (Short Papers), pages 101-104.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Learning Linear Ordering Problems for Better Translation", |
| "authors": [ |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Tromble", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1007--1016", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roy Tromble and Jason Eisner. 2009. Learning Linear Ordering Problems for Better Translation. In Pro- ceedings of the 2009 Conference on Empirical Meth- ods in Natural Language Processing, pages 1007- 1016.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Large Scale Parallel Document Mining for Machine Translation", |
| "authors": [ |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Jay", |
| "middle": [ |
| "M" |
| ], |
| "last": "Ponte", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashok", |
| "middle": [ |
| "C" |
| ], |
| "last": "Popat", |
| "suffix": "" |
| }, |
| { |
| "first": "Moshe", |
| "middle": [], |
| "last": "Dubiner", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1101--1109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jakob Uszkoreit, Jay M. Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. Large Scale Parallel Docu- ment Mining for Machine Translation. In Proceed- ings of the 23rd International Conference on Com- putational Linguistics, pages 1101-1109.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Syntax Based Reordering with Automatically Derived Rules for Improved Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Karthik", |
| "middle": [], |
| "last": "Visweswariah", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiri", |
| "middle": [], |
| "last": "Navratil", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Sorensen", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijil", |
| "middle": [], |
| "last": "Chenthamarakshan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nandakishore", |
| "middle": [], |
| "last": "Kambhatla", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1119--1127", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karthik Visweswariah, Jiri Navratil, Jeffrey Sorensen, Vijil Chenthamarakshan, and Nandakishore Kamb- hatla. 2010. Syntax Based Reordering with Au- tomatically Derived Rules for Improved Statistical Machine Translation. In Proceedings of the 23rd In- ternational Conference on Computational Linguis- tics, pages 1119-1127.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "A Word Reordering Model for Improved Machine Translation", |
| "authors": [ |
| { |
| "first": "Karthik", |
| "middle": [], |
| "last": "Visweswariah", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajakrishnan", |
| "middle": [], |
| "last": "Rajkumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Ankur", |
| "middle": [], |
| "last": "Gandhe", |
| "suffix": "" |
| }, |
| { |
| "first": "Ananthakrishnan", |
| "middle": [], |
| "last": "Ramanathan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiri", |
| "middle": [], |
| "last": "Navratil", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "486--496", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karthik Visweswariah, Rajakrishnan Rajkumar, Ankur Gandhe, Ananthakrishnan Ramanathan, and Jiri Navratil. 2011. A Word Reordering Model for Improved Machine Translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 486-496.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "HMM-based Word Alignment in Statistical Translation", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "Christoph", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "836--841", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based Word Alignment in Statistical Translation. In Proceedings of the 16th Conference on Computational Linguistics, pages 836-841.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Computational Linguistics", |
| "volume": "23", |
| "issue": "3", |
| "pages": "377--403", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3):377-403.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Improving a Statistical MT System with Automatically Learned Rewrite Patterns", |
| "authors": [ |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Mccord", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 20th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "508--514", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fei Xia and Michael McCord. 2004. Improving a Statistical MT System with Automatically Learned Rewrite Patterns. In Proceedings of the 20th International Conference on Computational Linguistics, pages 508-514.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages", |
| "authors": [ |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaeho", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Ringgaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "245--253", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 245-253.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Statistical Dependency Analysis with Support Vector Machines", |
| "authors": [ |
| { |
| "first": "Hiroyasu", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 8th International Workshop on Parsing Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "195--206", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical Dependency Analysis with Support Vector Machines. In Proceedings of the 8th International Workshop on Parsing Technologies, pages 195-206.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "A Ranking-based Approach to Word Reordering for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mu", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongdong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nenghai", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "912--920", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nan Yang, Mu Li, Dongdong Zhang, and Nenghai Yu. 2012. A Ranking-based Approach to Word Reordering for Statistical Machine Translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 912-920.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Max-Violation Perceptron and Forced Decoding for Scalable MT Training", |
| "authors": [ |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Haitao", |
| "middle": [], |
| "last": "Mi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1112--1123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heng Yu, Liang Huang, Haitao Mi, and Kai Zhao. 2013. Max-Violation Perceptron and Forced Decoding for Scalable MT Training. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1112-1123.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Discriminative Reordering Models for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings on the Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "55--63", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Zens and Hermann Ney. 2006. Discriminative Reordering Models for Statistical Machine Translation. In Proceedings on the Workshop on Statistical Machine Translation, pages 55-63.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Transition-based Dependency Parsing with Rich Non-local Features", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "188--193", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based Dependency Parsing with Rich Non-local Features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Short Papers, pages 188-193.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "An example of preordering.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "Bracketing transduction grammar.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "Top-down BTG parsing.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "text": "", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "num": null, |
| "text": "Top-down BTG parsing with beam search.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "num": null, |
| "text": "An example of word reordering with ambiguities.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF6": { |
| "num": null, |
| "text": "A training algorithm for latent variable Perceptron with beam search.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF0": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Parser states in top-down parsing.", |
| "html": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Feature templates.", |
| "html": null |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td>en-ja</td><td/><td/><td>ja-en</td></tr><tr><td/><td>FRS</td><td>\u03c4</td><td>BLEU</td><td>FRS</td><td>\u03c4</td><td>BLEU</td></tr><tr><td>Top-Down</td><td colspan=\"2\">(Manual-8k) 81.57 90.44</td><td>18.13</td><td colspan=\"2\">79.26 86.47</td><td>14.26</td></tr><tr><td/><td colspan=\"2\">(EM-10k) 74.79 85.87</td><td>17.07</td><td colspan=\"2\">72.51 82.65</td><td>14.55</td></tr><tr><td/><td colspan=\"2\">(EM-100k) 77.83 87.78</td><td>17.66</td><td colspan=\"2\">74.60 83.78</td><td>14.84</td></tr><tr><td/><td colspan=\"2\">(Forced-10k) 76.10 87.45</td><td>16.98</td><td colspan=\"2\">75.36 83.96</td><td>14.78</td></tr><tr><td colspan=\"3\">(Forced-100k) 78.76 89.22</td><td>17.88</td><td colspan=\"2\">76.58 85.25</td><td>15.54</td></tr><tr><td>Lader</td><td colspan=\"2\">(EM-100k) 75.41 86.85</td><td>17.40</td><td colspan=\"2\">74.89 82.15</td><td>14.59</td></tr><tr><td>No-Preordering</td><td colspan=\"2\">46.17 65.07</td><td>13.80</td><td colspan=\"2\">59.35 65.30</td><td>10.31</td></tr><tr><td>Manual-Rules</td><td colspan=\"2\">80.59 90.30</td><td>18.68</td><td colspan=\"2\">73.65 81.72</td><td>14.02</td></tr><tr><td>Auto-Rules</td><td colspan=\"2\">64.13 84.17</td><td>16.80</td><td colspan=\"2\">60.60 75.49</td><td>12.59</td></tr><tr><td>Classifier</td><td colspan=\"2\">80.89 90.61</td><td>18.53</td><td colspan=\"2\">74.24 82.83</td><td>13.90</td></tr></table>", |
| "text": "Speed and accuracy of preordering.", |
| "html": null |
| }, |
| "TABREF5": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>Distortion Limit</td><td>No-Preordering</td><td colspan=\"3\">Manual-Auto-Classifier Rules Rules</td><td colspan=\"3\">Lader (EM-100k) (EM-100k) (Forced-100k) Top-Down Top-Down</td></tr><tr><td>en-ja</td><td>5</td><td>13.80</td><td>18.68</td><td>16.80</td><td>18.53</td><td>17.40</td><td>17.66</td><td>17.88</td></tr><tr><td>en-ja</td><td>0</td><td>11.99</td><td>18.34</td><td>16.87</td><td>18.31</td><td>16.95</td><td>17.36</td><td>17.88</td></tr><tr><td>ja-en</td><td>5</td><td>10.31</td><td>14.02</td><td>12.59</td><td>13.90</td><td>14.59</td><td>14.84</td><td>15.54</td></tr><tr><td>ja-en</td><td>0</td><td>10.03</td><td>12.43</td><td>11.33</td><td>13.09</td><td>14.38</td><td>14.72</td><td>15.34</td></tr></table>", |
| "text": "BLEU score comparison.", |
| "html": null |
| }, |
| "TABREF6": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "BLEU scores for different distortion limits.", |
| "html": null |
| } |
| } |
| } |
| } |