| { |
| "paper_id": "D11-1020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:34:07.353837Z" |
| }, |
| "title": "A Novel Dependency-to-String Model for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "junxie@ict.ac.cn" |
| }, |
| { |
| "first": "Haitao", |
| "middle": [], |
| "last": "Mi", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "htmi@ict.ac.cn" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "liuqun@ict.ac.cn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Dependency structure, as a first step towards semantics, is believed to be helpful to improve translation quality. However, previous works on dependency structure based models typically resort to insertion operations to complete translations, which make it difficult to specify ordering information in translation rules. In our model of this paper, we handle this problem by directly specifying the ordering information in head-dependents rules which represent the source side as head-dependents relations and the target side as strings. The head-dependents rules require only substitution operation, thus our model requires no heuristics or separate ordering models of the previous works to control the word order of translations. Large-scale experiments show that our model performs well on long distance reordering, and outperforms the stateof-the-art constituency-to-string model (+1.47 BLEU on average) and hierarchical phrasebased model (+0.46 BLEU on average) on two Chinese-English NIST test sets without resort to phrases or parse forest. For the first time, a source dependency structure based model catches up with and surpasses the state-of-theart translation models.", |
| "pdf_parse": { |
| "paper_id": "D11-1020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Dependency structure, as a first step towards semantics, is believed to be helpful to improve translation quality. However, previous works on dependency structure based models typically resort to insertion operations to complete translations, which make it difficult to specify ordering information in translation rules. In our model of this paper, we handle this problem by directly specifying the ordering information in head-dependents rules which represent the source side as head-dependents relations and the target side as strings. The head-dependents rules require only substitution operation, thus our model requires no heuristics or separate ordering models of the previous works to control the word order of translations. Large-scale experiments show that our model performs well on long distance reordering, and outperforms the stateof-the-art constituency-to-string model (+1.47 BLEU on average) and hierarchical phrasebased model (+0.46 BLEU on average) on two Chinese-English NIST test sets without resort to phrases or parse forest. For the first time, a source dependency structure based model catches up with and surpasses the state-of-theart translation models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Dependency structure represents the grammatical relations that hold between the words in a sentence. It encodes semantic relations directly, and has the best inter-lingual phrasal cohesion properties (Fox, 2002) . Those attractive characteristics make it pos-sible to improve translation quality by using dependency structures.", |
| "cite_spans": [ |
| { |
| "start": 200, |
| "end": 211, |
| "text": "(Fox, 2002)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Some researchers pay more attention to use dependency structure on the target side. (Shen et al., 2008 ) presents a string-to-dependency model, which restricts the target side of each hierarchical rule to be a well-formed dependency tree fragment, and employs a dependency language model to make the output more grammatically. This model significantly outperforms the state-of-the-art hierarchical phrasebased model (Chiang, 2005) . However, those stringto-tree systems run slowly in cubic time (Huang et al., 2006) .", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 102, |
| "text": "(Shen et al., 2008", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 416, |
| "end": 430, |
| "text": "(Chiang, 2005)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 495, |
| "end": 515, |
| "text": "(Huang et al., 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Using dependency structure on the source side is also a promising way, as tree-based systems run much faster (linear time vs. cubic time, see (Huang et al., 2006) ). Conventional dependency structure based models (Lin, 2004; Quirk et al., 2005; Ding and Palmer, 2005; Xiong et al., 2007) typically employ both substitution and insertion operation to complete translations, which make it difficult to specify ordering information directly in the translation rules. As a result, they have to resort to either heuristics (Lin, 2004; Xiong et al., 2007) or separate ordering models (Quirk et al., 2005; Ding and Palmer, 2005) to control the word order of translations.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 162, |
| "text": "(Huang et al., 2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 213, |
| "end": 224, |
| "text": "(Lin, 2004;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 225, |
| "end": 244, |
| "text": "Quirk et al., 2005;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 245, |
| "end": 267, |
| "text": "Ding and Palmer, 2005;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 268, |
| "end": 287, |
| "text": "Xiong et al., 2007)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 518, |
| "end": 529, |
| "text": "(Lin, 2004;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 530, |
| "end": 549, |
| "text": "Xiong et al., 2007)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 578, |
| "end": 598, |
| "text": "(Quirk et al., 2005;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 599, |
| "end": 621, |
| "text": "Ding and Palmer, 2005)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we handle this problem by directly specifying the ordering information in headdependents rules that represent the source side as head-dependents relations and the target side as string. The head-dependents rules have only one substitution operation, thus we don't face the problems appeared in previous work and get rid of the heuristics and ordering model. To alleviate data sparseness problem, we generalize the lexicalized words in head-dependents relations with their corresponding categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the following parts, we first describe the motivation of using head-dependents relations (Section 2). Then we formalize our grammar (Section 3), present our rule acquisition algorithm (Section 4), our model (Section 5) and decoding algorithm (Section 6). Finally, large-scale experiments (Section 7) show that our model exhibits good performance on long distance reordering, and outperforms the stateof-the-art tree-to-string model (+1.47 BLEU on average) and hierarchical phrase-based model (+0.46 BLEU on average) on two Chinese-English NIST test sets. For the first time, a source dependency tree based model catches up with and surpasses the stateof-the-art translation models. Each node is annotated with the part-of-speech (POS) of the related word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For convenience, we use the lexicon dependency grammar (Hellwig, 2006) which adopts a bracket representation to express a projective dependency structure. The dependency structure of Figure 1 (a) can be expressed as:", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 70, |
| "text": "(Hellwig, 2006)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 183, |
| "end": 191, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "((2010\u5e74) (FIFA) \u4e16\u754c\u676f) (\u5728(\u5357\u975e)) (\u6210\u529f) \u4e3e\u884c", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "where the lexicon in brackets represents the dependents, while the lexicon out the brackets is the head.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To construct the dependency structure of a sentence, the most important thing is to establish dependency relations and distinguish the head from the dependent. Here are some criteria (Zwicky, 1985; x2:", |
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 197, |
| "text": "(Zwicky, 1985;", |
| "ref_id": null |
| }, |
| { |
| "start": 198, |
| "end": 198, |
| "text": "", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "x2: x1: x1:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "x3 Where \"x 1 :\u4e16 \u754c \u676f\" and \"x 2 :\u5728\" indicate substitution sites which can be replaced by a subtree rooted at \"\u4e16\u754c\u676f\" and \"\u5728\" respectively. \"x 3 :AD\"indicates a substitution site that can be replaced by a subtree whose root has part-of-speech \"AD\". The underline denotes a leaf node. Hudson, 1990) for identifying a syntactic relation between a head and a dependent between a headdependent pair:", |
| "cite_spans": [ |
| { |
| "start": 280, |
| "end": 293, |
| "text": "Hudson, 1990)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. head determines the syntactic category of C, and can often replace C;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2. head determines the semantic category of C; dependent gives semantic specification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A head-dependents relation is composed of a head and all its dependents as shown in Figure 1 (b). Since all the head-dependent pairs satisfy criteria 1 and 2, we can deduce that a head-dependents relation L holds the property that the head determines the syntactic and semantic categories of L, and can often replace L. Therefore, we can recur-sively replace the bottom level head-dependent relations of a dependency structure with their heads until the root. This implies an representation of the generation of a dependency structure on the basis of head-dependents relation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 84, |
| "end": 92, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Inspired by this, we represent the translation rules of our dependency-to-string model on the foundation of head-dependents relations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "3 Dependency-to-String Grammar Figure 1 (c) and (d) show two examples of the translation rules used in our dependency-to-string model. The former is an example of head-dependent rules that represent the source side as head-dependents relations and act as both translation rules and reordering rules. The latter is an example of head rules which are used for translating words.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 51, |
| "text": "(d)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 39, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Formally, a dependency-to-string grammar is defined as a tuple \u27e8\u03a3, N, \u2206, R\u27e9, where \u03a3 is a set of source language terminals, N is a set of categories for the terminals in \u03a3 , \u2206 is a set of target language terminals, and R is a set of translation rules. A rule r in R is a tuple \u27e8t, s, \u03d5\u27e9, where:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "t is a node labeled by terminal from \u03a3; or a head-dependents relation of the source dependency structures, with each node labeled by a terminal from \u03a3 or a variable from a set X = {x 1 , x 2 , ...} constrained by a terminal from \u03a3 or a category from N ; s \u2208 (X \u222a \u2206) * is the target side string;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u03d5 is a one-to-one mapping from nonterminals in t to variables in s.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For example, the head-dependents rule shown in Figure 1 (c) can be formalized as:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 47, |
| "end": 55, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "t = ((x 1 :\u4e16\u754c\u676f) (x 2 :\u5728) (x 3 :AD) \u4e3e\u884c) s = x 1 was held x 3 x 2 \u03d5 = {x 1 :\u4e16\u754c\u676f \u2194 x 1 , x 2 :\u5728 \u2194 x 2 , x 3 :AD \u2194 x 3 }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where the underline indicates a leaf node, and x i :letters indicates a pair of variable and its constraint.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "A derivation is informally defined as a sequence of steps converting a source dependency structure into a target language string, with each step applying one translation rule. As an example, Figure 2 shows the derivation for translating a Chinese (CH) sentence into an English (EN) string.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "/P /P /NR /NR / /N NR R /NR /A AD D /AD 2010 /NT 2010 /NT FIFA/ /N NR R FIFA/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "(c) (d) (e) (f) (g) NR R R R R R D D D R R R N N N N N N N N R R r3: (2010 ) (FIFA) AE2010 FIFA World Cup", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Relation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EN 2010 FIFA World Cup was held successfully in South Africa", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CH 2010\u5e74 FIFA \u4e16\u754c\u676f \u5728 \u5357\u975e \u6210\u529f \u4e3e\u884c", |
| "sec_num": null |
| }, |
| { |
| "text": "The Chinese sentence (a) is first parsed into a dependency structure (b), which is converted into an English string in five steps. First, at the root node, we apply head-dependents rule r 1 shown in Figure 1 (c) to translate the top level head-dependents relation and result in three unfinished substructures and target string in (c). The rule is particular interesting since it captures the fact: in Chinese prepositional phrases and adverbs typically modify verbs on the left, whereas in English prepositional phrases and adverbs typically modify verbs on the right. Second, we use head rule r 2 translating \"\u6210\u529f\" into \"successfully\" and reach situation (d). Third, we apply headdependents rule r 3 translating the head-dependents relation rooted at \"\u4e16\u754c\u676f\" and yield (e). Fourth, head-dependents rules r 5 partially translate the subtree rooted at \"\u5728\" and arrive situation in (f). Finally, we apply head rule r 5 translating the residual node \"\u5357\u975e\" and obtain the final translation in (g).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 199, |
| "end": 208, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "CH 2010\u5e74 FIFA \u4e16\u754c\u676f \u5728 \u5357\u975e \u6210\u529f \u4e3e\u884c", |
| "sec_num": null |
| }, |
| { |
| "text": "The rule acquisition begins with a word-aligned corpus: a set of triples \u27e8T, S, A\u27e9, where T is a source dependency structure, S is a target side sentence, and A is an alignment relation between T and S. We extract from each triple \u27e8T, S, A\u27e9 head rules that are consistent with the word alignments and headdependents rules that satisfy the intuition that syntactically close items tend to stay close across languages. We accomplish the rule acquisition through three steps: tree annotation, head-dependents fragments identification and rule induction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule Acquisition", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Given a triple \u27e8T, S, A\u27e9 as shown in Figure 3 , we first annotate each node n of T with two attributes: head span and dependency span, which are defined as follows.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 37, |
| "end": 45, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Definition 1. Given a node n, its head span hsp(n) is a set of index of the target words aligned to n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For example, hsp(2010\u5e74)={1, 5}, which corresponds to the target words \"2010\" and \"was\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Definition 2. A head span hsp(n) is consistent if it satisfies the following property: Figure 3: An annotated dependency structure. Each node is annotated with two spans, the former is head span and the latter dependency span. The nodes in acceptable head set are displayed in gray, and the nodes in acceptable dependent set are denoted by boxes. The triangle denotes the only acceptable head-dependents fragment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2200 n \u2032 \u0338 =n hsp(n \u2032 ) \u2229 hsp(n) = \u2205. /P {5,8}{9,10} /P {5,8}{9,10} /NR {3,4}{2-4} /NR {9,10}{9,10} /AD {7}{7} 2010 /NT {1,5}{} 2010 /NT {1,5}{} FIFA/NR {2,2}{2,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For example, hsp(\u5357\u975e) is consistent, while hsp(2010\u5e74) is not consistent since hsp(2010\u5e74) \u2229 hsp(\u5728) = 5. Definition 3. Given a head span hsp(n), its closure cloz(hsp(n)) is the smallest contiguous head span that is a superset of hsp(n).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For example, cloz(hsp(2010\u5e74)) = {1, 2, 3, 4, 5}, which corresponds to the target side word sequence \"2010 FIFA World Cup was\". For simplicity, we use {1-5} to denotes the contiguous span {1, 2, 3, 4, 5}. Definition 4. Given a subtree T \u2032 rooted at n, the dependency span dsp(n) of n is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "dsp(n) = cloz( \u222a n \u2032 \u2208T \u2032 hsp(n \u2032 ) is consistent hsp(n \u2032 )).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "If the head spans of all the nodes of T \u2032 is not consistent, dsp(n) = \u2205.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For example, since hsp(\u5728) is not consistent, dsp(\u5728)=dsp(\u5357\u975e)={9, 10}, which corresponds to the target words \"South\" and \"Africa\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The tree annotation can be accomplished by a single postorder transversal of T . The extraction of head rules from each node can be readily achieved with the same criteria as (Och and Ney, 2004) . In the following, we focus on head-dependents rules acquisition.", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 194, |
| "text": "(Och and Ney, 2004)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Annotation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We then identify the head-dependents fragments that are suitable for rule induction from the annotated dependency structure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To facilitate the identification process, we first define two sets of dependency structure related to head spans and dependency spans.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Definition 5. A acceptable head set ahs(T) of a dependency structure T is a set of nodes, each of which has a consistent head span.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For example, the elements of the acceptable head set of the dependency structure in Figure 3 are displayed in gray.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 84, |
| "end": 92, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Definition 6. A acceptable dependent set adt(T) of a dependency structure T is a set of nodes, each of which satisfies: dep(n) \u0338 = \u2205.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For example, the elements of the acceptable dependent set of the dependency structure in Figure 3 are denoted by boxes.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 89, |
| "end": 97, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Definition 7. We say a head-dependents fragments is acceptable if it satisfies the following properties:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "1. the root falls into acceptable head set;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "2. all the sinks fall into acceptable dependent set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "An acceptable head-dependents fragment holds the property that the head span of the root and the dependency spans of the sinks do not overlap with each other, which enables us to determine the reordering in the target side.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The identification of acceptable head-dependents fragments can be achieved by a single preorder transversal of the annotated dependency structure. For each accessed internal node n, we check whether the head-dependents fragment f rooted at n is acceptable. If f is acceptable, we output an acceptable head-dependents fragment; otherwise we access the next node.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Typically, each acceptable head-dependents fragment has three types of nodes: internal nodes, internal nodes of the dependency structure; leaf nodes, leaf nodes of the dependency structure; head node, a special internal node acting as the head of the related head-dependents relation. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Dependents Fragments Identification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "From each acceptable head-dependents fragment, we induce a set of lexicalized and unlexicalized head-dependents rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule Induction", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We induce a lexicalized head-dependents rule from an acceptable head-dependents fragment by the following procedure:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicalized Rule", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "1. extract the head-dependents relation and mark the internal nodes as substitution sites. This forms the input of a head-dependents rule;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicalized Rule", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "2. place the nodes in order according to the head span of the root and the dependency spans of the sinks, then replace the internal nodes with variables and the other nodes with the target words covered by their head spans. This forms the output of a head-dependents rule. Figure 4 shows an acceptable head-dependents fragment and a lexicalized head-dependents rule in-duced from it.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 273, |
| "end": 281, |
| "text": "Figure 4", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexicalized Rule", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "Since head-dependents relations with verbs as heads typically consist of more than four nodes, employing only lexicalized head-dependents rules will result in severe sparseness problem. To alleviate this problem, we generalize the lexicalized headdependents rules and induce rules with unlexicalized nodes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "As we know, the modification relation of a headdependents relation is determined by the edges. Therefore, we can replace the lexical word of each node with its categories (i.e. POS) and obtain new head-dependents relations with unlexicalized nodes holding the same modification relation. Here we call the lexicalized and unlexicalized head-dependents relations as instances of the modification relation. For a head-dependents relation with m node, we can produce 2 m \u2212 1 instances with unlexicalized nodes. Each instance represents the modification relation with a different specification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "Based on this observation, from each lexicalized head-dependent rule, we generate new headdependents rules with unlexicalized nodes according to the following principles:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "1. change the aligned part of the target string into a new variable when turning a head node or a leaf node into its category;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "2. keep the target side unchanged when turning a internal node into its category.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "Restrictions: Since head-dependents relations with verbs as heads typically consists of more than four nodes, enumerating all the instances will result in a massive grammar with too many kinds of rules and inflexibility in decoding. To alleviate these problems, we filter the grammar with the following principles:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "1. nodes of the same type turn into their categories simultaneously.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "2. as for leaf nodes, only those with open class words can be turned into their categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "In our experiments of this paper, we only turn those dependents with POS tag in the set of {CD,DT,OD,JJ,NN,NR,NT,AD,FW,PN} into their categories. Figure 5 : An illustration of rule generalization. Where \"x 1 :\u4e16 \u754c \u676f\" and \"x 2 :\u5728\" indicate substitution sites which can be replaced by a subtree rooted at \"\u4e16\u754c\u676f\" and \"\u5728\" respectively. \"x 3 :AD\"indicates a substitution site that can be replaced by a subtree whose root has partof-speech \"AD\". The underline denotes a leaf node. The box indicates the starting lexicalized head-dependents rule. Figure 5 illustrates the rule generalization process under these restrictions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 146, |
| "end": 154, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 538, |
| "end": 546, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unlexicalized Rules", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "We handle the unaligned words of the target side by extending the head spans of the lexicalized head and leaf nodes on both left and right directions. This procedure is similar with the method of (Och and Ney, 2004 ) except that we might extend several spans simultaneously. In this process, we might obtain m(m \u2265 1) head-dependents rules from a headdependent fragment in handling unaligned words.", |
| "cite_spans": [ |
| { |
| "start": 196, |
| "end": 214, |
| "text": "(Och and Ney, 2004", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unaligned Words", |
| "sec_num": "4.3.3" |
| }, |
| { |
| "text": "Each of these rules is assigned a fractional count of 1/m.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unaligned Words", |
| "sec_num": "4.3.3" |
| }, |
| { |
| "text": "Rule acquisition is a three-step process, summarized in Algorithm 1. We take the extracted rule set as observed data and use a relative frequency estimator to obtain the translation probabilities P(t|s) and P(s|t).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm for Rule Acquisition", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Following (Och and Ney, 2002), we adopt a general log-linear model. Let d be a derivation that converts a source dependency structure T into a target string e. The probability of d is defined as:", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 29, |
| "text": "(Och and Ney, 2002)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The model", |
| "sec_num": "5" |
| }, |
| { |
| "text": "P(d) \u221d \u220f_i \u03d5_i(d)^{\u03bb_i} (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The model", |
| "sec_num": "5" |
| }, |
| { |
| "text": "where \u03d5_i are features defined on derivations and \u03bb_i are the feature weights. In the experiments of this paper, we use the following seven features:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The model", |
| "sec_num": "5" |
| }, |
| { |
| "text": "-translation probabilities P(t|s) and P(s|t);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The model", |
| "sec_num": "5" |
| }, |
| { |
| "text": "-lexical translation probabilities P_lex(t|s) and P_lex(s|t);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The model", |
| "sec_num": "5" |
| }, |
| { |
| "text": "-rule penalty exp(\u22121); -language model P_lm(e);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The model", |
| "sec_num": "5" |
| }, |
| { |
| "text": "-word penalty exp(|e|).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The model", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Our decoder is based on bottom-up chart parsing. It finds the best derivation d* that converts the input dependency structure into a target string among all possible derivations D:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "6" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "d* = argmax_{d \u2208 D} P(d)", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Decoding", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Given a source dependency structure T, the decoder traverses T in post-order. For each internal node n it visits, it enumerates all instances of the related modification relations of the head-dependents relation rooted at n, and checks the rule set for matching translation rules. If no rule matches, we construct a pseudo translation rule according to the word order of the head-dependents relation. For example, suppose we cannot find any translation rule for \"(2010\u5e74) (FIFA) \u4e16\u754c\u676f\"; we then construct the pseudo translation rule \"(x 1 :2010\u5e74) (x 2 :FIFA)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "6" |
| }, |
| { |
| "text": "x 3 :\u4e16\u754c\u676f \u2192 x 1 x 2 x 3 \".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "6" |
| }, |
| { |
| "text": "A larger translation is generated by substituting the variables in the target side of a translation rule with the translations of the corresponding dependents. We use cube pruning (Chiang, 2007; Huang and Chiang, 2007) to find the k-best items with an integrated language model for each node.", |
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 202, |
| "text": "(Chiang, 2007;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 203, |
| "end": 226, |
| "text": "Huang and Chiang, 2007)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "6" |
| }, |
| { |
| "text": "To balance performance and speed, we prune the search space in several ways. First, beam threshold \u03b2: items with a score worse than \u03b2 times the best score in the same cell are discarded. Second, beam size b: items with a score worse than that of the b-th best item in the same cell are discarded. An item consists of the information needed during decoding, and each cell contains all the items for the subtree rooted at the corresponding node. In our experiments, we set \u03b2 = 10^{-3} and b = 300. Additionally, we also prune rules that have the same source side (b = 100).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We evaluated the performance of our dependency-to-string model by comparing it with replications of the hierarchical phrase-based model and the tree-to-string models on Chinese-English translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our training corpus consists of 1.5M sentence pairs from LDC data, including LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data preparation", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We parse the source sentences with the Stanford Parser (Klein and Manning, 2003) into projective dependency structures, whose nodes are annotated with POS tags and whose edges are annotated with typed dependencies. In our implementation, we make use of the POS tags only.", |
| "cite_spans": [ |
| { |
| "start": 51, |
| "end": 76, |
| "text": "(Klein and Manning, 2003)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data preparation", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We obtain the word alignments by running GIZA++ (Och and Ney, 2003) on the corpus in both directions and applying \"grow-diag-and\" refinement (Koehn et al., 2003) .", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 67, |
| "text": "(Och and Ney, 2003)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 141, |
| "end": 161, |
| "text": "(Koehn et al., 2003)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data preparation", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We use the SRI Language Modeling Toolkit (Stolcke, 2002) to train a 4-gram language model with modified Kneser-Ney smoothing on the Xinhua portion of the Gigaword corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data preparation", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We use the NIST MT Evaluation test set of 2002 as our development set, and the NIST MT Evaluation test sets of 2004 (MT04) and 2005 (MT05) as our test sets. Translation quality is evaluated by the case-insensitive NIST BLEU-4 metric (Papineni et al., 2002) . 1 We use standard MERT (Och, 2003) to tune the feature weights so as to maximize the system's BLEU score on the development set. Table 1 : Statistics of the extracted rules on the training corpus and BLEU scores on the test sets, where \"+\" means dep2str is significantly better than cons2str with p < 0.01.", |
| "cite_spans": [ |
| { |
| "start": 221, |
| "end": 244, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 247, |
| "end": 248, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 282, |
| "end": 293, |
| "text": "(Och, 2003)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 391, |
| "end": 398, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data preparation", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We take a replication of Hiero (Chiang, 2007) as the hierarchical phrase-based baseline. In our experiments of this paper, we set the beam size b = 200 and the beam threshold \u03b2 = 0. The maximum initial phrase length is 10. We use the constituency-to-string model (Liu et al., 2006) as the syntax-based baseline, which makes use of composed rules (Galley et al., 2006) without handling unaligned words. In our experiments of this paper, we set tatTable-limit=20, tatTable-threshold=10^{-1}, stack-limit=100, stack-threshold=10^{-1}, hight-limit=3, and length-limit=7.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 45, |
| "text": "(Chiang, 2007)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 265, |
| "end": 283, |
| "text": "(Liu et al., 2006)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 352, |
| "end": 373, |
| "text": "(Galley et al., 2006)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The baseline models", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "We display the results of our experiments in Table 1 . Our dependency-to-string model (dep2str) significantly outperforms its constituency structure-based counterpart (cons2str) by +1.27 and +1.68 BLEU on MT04 and MT05, respectively. Moreover, without resorting to phrases or a parse forest, dep2str surpasses the hierarchical phrase-based model (hiero-re) by +0.53 and +0.4 BLEU on MT04 and MT05, respectively, on the basis of a 62% smaller rule set.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 45, |
| "end": 53, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "Furthermore, we compare some actual translations generated by cons2str, hiero-re and dep2str. Figure 6 shows two translations from our test sets MT04 and MT05, selected because each contains a long-distance dependency common in Chinese.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 94, |
| "end": 102, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "In the first example, the Chinese input contains a complex long-distance dependency \"\u5df4\u5c3c\u8036 \u5728... \u4e0e...\u540e \u8868\u793a\". This dependency corresponds to the sentence pattern \"noun + prepositional phrase + prepositional phrase + verb\", where the former prepositional phrase specifies the location and the latter the time. Both cons2str and hiero-re are confused by this sentence and mistakenly treat \"\u9c8d\u5c14 (Powell)\" as the subject, resulting in translations whose meaning differs from the source sentence. Conversely, although \"\u5728\" is wrongly translated into a comma (it should be \"at\"), dep2str captures this complex dependency and translates it into \"After ... , Barnier said\", which accords with the reordering of the reference.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "In the second example, the Chinese input contains a long-distance dependency \"\u4e2d\u56fd \u8d5e\u8d4f ... \u52aa\u529b\", which corresponds to the simple pattern \"noun phrase + verb + noun phrase\". However, because the modifiers of \"\u52aa\u529b\" contain two subsentences totaling 24 words, the sentence looks rather complicated. Cons2str and hiero-re fail to capture this long-distance dependency and produce monotonic translations that do not reflect the meaning of the source sentence. In contrast, dep2str successfully captures this long-distance dependency and translates it into \"China appreciates efforts of ...\", which is almost the same as the reference \"China appreciates the efforts of ...\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "All these results demonstrate the effectiveness of our dependency-to-string model in both translation and long-distance reordering. We believe the advantage of dep2str comes from the characteristics of dependency structures, which tend to bring semantically related elements together (e.g., verbs become adjacent to all their arguments) and are better suited to lexicalized models (Quirk et al., 2005). The inability of cons2str and hiero-re to handle the long-distance reordering of these sentences lies not in the representation of their translation rules but in the compromises made in rule extraction or decoding to balance speed or grammar size against performance. The hierarchical phrase-based model prohibits any nonterminal X from spanning a substring longer than 10 words on the source side so as to make the decoding algorithm asymptotically linear-time (Chiang, 2005).", |
| "cite_spans": [ |
| { |
| "start": 376, |
| "end": 396, |
| "text": "(Quirk et al., 2005)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 850, |
| "end": 864, |
| "text": "(Chiang, 2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "Constituency structure-based models, meanwhile, typically constrain the number of internal nodes (Galley et al., 2006) and/or the height (Liu et al., 2006) of translation rules so as to balance grammar size and performance. Both strategies limit the models' ability to process the long-distance reordering of sentences with long and complex modification relations.", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 112, |
| "text": "(Galley et al., 2006)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 131, |
| "end": 149, |
| "text": "(Liu et al., 2006)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "As a first step towards semantics, dependency structures are attractive for machine translation, and many efforts have been made to incorporate this knowledge into translation models. (Lin, 2004; Quirk et al., 2005; Ding and Palmer, 2005; Xiong et al., 2007) make use of source dependency structures. (Lin, 2004) employs linear paths as phrases and views translation as minimal path covering. (Quirk et al., 2005 ) extends paths to treelets, arbitrary connected subgraphs of dependency structures, and proposes a model based on treelet pairs. Both models require projection of the source dependency structure to the target side via word alignments, and thus cannot handle non-isomorphism between languages. To alleviate this problem, (Xiong et al., 2007) presents a dependency treelet string correspondence model which directly maps a dependency structure to a target string. (Ding and Palmer, 2005 ) presents a translation model based on Synchronous Dependency Insertion Grammar (SDIG), which handles some of the non-isomorphism but requires both source and target dependency structures. Most importantly, none of these works specifies the ordering information directly in translation rules; they resort to either heuristics (Lin, 2004; Xiong et al., 2007) or separate ordering models (Quirk et al., 2005; Ding and Palmer, 2005) to control the word order of translations. By comparison, our model requires only the source dependency structure, and handles the non-isomorphism and ordering problems simultaneously by directly specifying the ordering information in head-dependents rules, which represent the source side as head-dependents relations and the target side as strings. (Shen et al., 2008) exploits target dependency structures as dependency language models to ensure the grammaticality of the target string. 
(Shen et al., 2008 ) extends the hierarchical phrase-based model and presents a string-to-dependency model, which employs string-to-dependency rules whose source sides are strings and whose target sides are well-formed dependency structures. In contrast, our model exploits source dependency structures; as a tree-based system, it runs much faster (linear time vs. cubic time, see (Huang et al., 2006) ).", |
| "cite_spans": [ |
| { |
| "start": 196, |
| "end": 207, |
| "text": "(Lin, 2004;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 208, |
| "end": 227, |
| "text": "Quirk et al., 2005;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 228, |
| "end": 250, |
| "text": "Ding and Palmer, 2005;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 251, |
| "end": 270, |
| "text": "Xiong et al., 2007)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 313, |
| "end": 324, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 404, |
| "end": 423, |
| "text": "(Quirk et al., 2005", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 744, |
| "end": 764, |
| "text": "(Xiong et al., 2007)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 885, |
| "end": 907, |
| "text": "(Ding and Palmer, 2005", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1232, |
| "end": 1243, |
| "text": "(Lin, 2004;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1244, |
| "end": 1263, |
| "text": "Xiong et al., 2007)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1292, |
| "end": 1312, |
| "text": "(Quirk et al., 2005;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1313, |
| "end": 1335, |
| "text": "Ding and Palmer, 2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1680, |
| "end": 1699, |
| "text": "(Shen et al., 2008)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1819, |
| "end": 1837, |
| "text": "(Shen et al., 2008", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 2187, |
| "end": 2207, |
| "text": "(Huang et al., 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In this paper, we present a novel dependency-to-string model, which employs head-dependents rules that represent the source side as head-dependents relations and the target side as strings. The head-dependents rules specify the ordering information directly and require only the substitution operation. Thus, our model does not need the heuristics or separate ordering models of previous works to control the word order of translations. Large-scale experiments show that our model exhibits good performance on long-distance reordering and outperforms the state-of-the-art constituency-to-string model and hierarchical phrase-based model without resorting to phrases or a parse forest. For the first time, a source dependency-based model shows improvement over the state-of-the-art translation models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "9" |
| }, |
| { |
| "text": "In future work, we will exploit the semantic information encoded in dependency structures, which is expected to further improve translations, and replace 1-best dependency structures with dependency forests so as to alleviate the influence of parse errors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "9" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by the National Natural Science Foundation of China, Contracts 60736014, 60873167, and 90920004. We are grateful to the anonymous reviewers for their thorough reviewing and valuable suggestions. We thank Yajuan Lv, Wenbin Jiang, Hao Xiong, Yang Liu, Xinyan Xiao, Tian Xia and Yun Huang for their insightful advice on both the experiments and the writing. Special thanks go to Qian Chen for supporting my pursuit throughout.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": " Figure 6 : Actual translations produced by the baselines and our system. For our system, we also display the correspondence of the long-distance dependencies in Chinese and English. Here we omit the edges irrelevant to the long-distance dependencies.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1, |
| "end": 9, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A hierarchical phrase-based model for statistical machine translation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL 2005", |
| "volume": "", |
| "issue": "", |
| "pages": "263--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL 2005, pages 263-270.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Hierarchical phrase-based translation. Computational Linguistics", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Machine translation using probabilistic synchronous dependency insertion grammars", |
| "authors": [ |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL 2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuan Ding and Martha Palmer. 2005. Machine trans- lation using probabilistic synchronous dependency in- sertion grammars. In Proceedings of ACL 2005.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Phrasal cohesion and statistical machine translation", |
| "authors": [ |
| { |
| "first": "Heidi", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fox", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of EMNLP 2002", |
| "volume": "", |
| "issue": "", |
| "pages": "304--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heidi J. Fox. 2002. Phrasal cohesion and statistical ma- chine translation. In In Proceedings of EMNLP 2002, pages 304-311.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Scalable inference and training of context-rich syntactic translation models", |
| "authors": [ |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Graehl", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Deneefe", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ignacio", |
| "middle": [], |
| "last": "Thayer", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of ACL 2006", |
| "volume": "", |
| "issue": "", |
| "pages": "961--968", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceed- ings of ACL 2006, pages 961-968, Sydney, Australia, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Parsing with dependency grammars", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Hellwig", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Dependenz und Valenz / Dependency and Valency", |
| "volume": "2", |
| "issue": "", |
| "pages": "1081--1109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Hellwig. 2006. Parsing with dependency gram- mars. In Dependenz und Valenz / Dependency and Va- lency, volume 2, pages 1081-1109. Berlin, New York.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Forest rescoring: Faster decoding with integrated language models", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL 2007", |
| "volume": "", |
| "issue": "", |
| "pages": "144--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang and David Chiang. 2007. Forest rescor- ing: Faster decoding with integrated language models. In Proceedings of ACL 2007, pages 144-151, Prague, Czech Republic, June.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A syntax-directed translator with extended domain of locality", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. A syntax-directed translator with extended domain of locality. In Proceedings of the Workshop on Computa- tionally Hard Problems and Joint Inference in Speech and Language Processing, pages 1-8, New York City, New York, June. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "English Word Grammar", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Hudson", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Hudson. 1990. English Word Grammar. Blackwell.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Fast exact inference with a factored model for natural language parsing", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Advances in Neural Information Processing Systems 15 (NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "3--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein and Christopher D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In In Advances in Neural Information Pro- cessing Systems 15 (NIPS, pages 3-10. MIT Press.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A path-based transfer model for machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "625--630", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Edmonton, Canada, July. Dekang Lin. 2004. A path-based transfer model for machine translation. In Proceedings of Coling 2004, pages 625-630, Geneva, Switzerland, Aug 23-Aug 27.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Tree-tostring alignment template for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shouxun", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of ACL 2006", |
| "volume": "", |
| "issue": "", |
| "pages": "609--616", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to- string alignment template for statistical machine trans- lation. In Proceedings of ACL 2006, pages 609-616, Sydney, Australia, July.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Discriminative training and maximum entropy models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Franz", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of 40th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimi- native training and maximum entropy models for sta- tistical machine translation. In Proceedings of 40th", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Annual Meeting of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "295--302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 295-302, Philadelphia, Pennsylva- nia, USA, July.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A systematic comparison of various statistical alignment models", |
| "authors": [ |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Franz", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics", |
| "volume": "29", |
| "issue": "1", |
| "pages": "19--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The alignment template approach to statistical machine translation", |
| "authors": [ |
| { |
| "first": "Franz Josef", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Minimum error rate training in statistical machine translation", |
| "authors": [ |
| { |
| "first": "Franz Josef", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of ACL-2003", |
| "volume": "", |
| "issue": "", |
| "pages": "160--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL-2003, pages 160-167, Sapporo, Japan, July.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ACL 2002", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311-318, Philadelphia, Pennsylvania, USA, July.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Dependency treelet translation: Syntactically informed phrasal smt", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Quirk", |
| "suffix": "" |
| }, |
| { |
| "first": "Arul", |
| "middle": [], |
| "last": "Menezes", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL 2005", |
| "volume": "", |
| "issue": "", |
| "pages": "271--279", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal smt. In Proceedings of ACL 2005, pages 271-279.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A new string-to-dependency machine translation algorithm with a target dependency language model", |
| "authors": [ |
| { |
| "first": "Libin", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinxi", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL 2008: HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "577--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL 2008: HLT, pages 577-585, Columbus, Ohio, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Srilm - an extensible language modeling toolkit", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ICSLP", |
| "volume": "30", |
| "issue": "", |
| "pages": "901--904", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Proceedings of ICSLP, volume 30, pages 901-904.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A dependency treelet string correspondence model for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Deyi", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shouxun", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "40--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deyi Xiong, Qun Liu, and Shouxun Lin. 2007. A dependency treelet string correspondence model for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 40-47, Prague, Czech Republic, June.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Examples of dependency structure (a), headdependents relation (b), head-dependents rule (r 1 of Figure 2) and head rule (d).", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "An example derivation of dependency-to-string translation. The dash lines indicate the reordering when employing a head-dependents rule.", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "text": "A lexicalized head-dependents rule (b) induced from the only acceptable head-dependents fragment (a) of Figure 3.", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Algorithm for Rule Acquisition. Input: source dependency structure T, target string S, alignment A. Output: translation rule set R. 1 HSet \u2190 ACCEPTABLE HEAD(T,S,A) 2 DSet \u2190 ACCEPTABLE DEPENDENT(T,S,A) 3 for each node n \u2208 HSet do 4", |
| "uris": null |
| }, |
| "TABREF5": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "ftp://jaguar.ncsl.nist.gov/mt/resources/mteval-v11b.pl", |
| "content": "<table><tr><td colspan=\"4\">System Rule # MT04(%) MT05(%)</td></tr><tr><td>cons2str</td><td>30M</td><td>34.55</td><td>31.94</td></tr><tr><td colspan=\"2\">hiero-re 148M</td><td>35.29</td><td>33.22</td></tr><tr><td>dep2str</td><td>56M</td><td>35.82 +</td><td>33.62 +</td></tr></table>" |
| } |
| } |
| } |
| } |