{
"paper_id": "Q13-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:08:48.192580Z"
},
"title": "Large-scale Word Alignment Using Soft Dependency Cohesion Constraints",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {}
},
"email": "zgwang@nlpr.ia.ac.cn"
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {}
},
"email": "cqzong@nlpr.ia.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dependency cohesion refers to the observation that phrases dominated by disjoint dependency subtrees in the source language generally do not overlap in the target language. It has been verified to be a useful constraint for word alignment. However, previous work either treats this as a hard constraint or uses it as a feature in discriminative models, which is ineffective for large-scale tasks. In this paper, we take dependency cohesion as a soft constraint, and integrate it into a generative model for large-scale word alignment experiments. We also propose an approximate EM algorithm and a Gibbs sampling algorithm to estimate model parameters in an unsupervised manner. Experiments on large-scale Chinese-English translation tasks demonstrate that our model achieves improvements in both alignment quality and translation quality.",
"pdf_parse": {
"paper_id": "Q13-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "Dependency cohesion refers to the observation that phrases dominated by disjoint dependency subtrees in the source language generally do not overlap in the target language. It has been verified to be a useful constraint for word alignment. However, previous work either treats this as a hard constraint or uses it as a feature in discriminative models, which is ineffective for large-scale tasks. In this paper, we take dependency cohesion as a soft constraint, and integrate it into a generative model for large-scale word alignment experiments. We also propose an approximate EM algorithm and a Gibbs sampling algorithm to estimate model parameters in an unsupervised manner. Experiments on large-scale Chinese-English translation tasks demonstrate that our model achieves improvements in both alignment quality and translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word alignment is the task of identifying word correspondences between parallel sentence pairs. Word alignment has become a vital component of statistical machine translation (SMT) systems, since it is required by almost all state-of-the-art SMT systems for the purpose of extracting phrase tables or even syntactic transformation rules (Koehn et al., 2007; Galley et al., 2004) .",
"cite_spans": [
{
"start": 337,
"end": 357,
"text": "(Koehn et al., 2007;",
"ref_id": "BIBREF16"
},
{
"start": 358,
"end": 378,
"text": "Galley et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "During the past two decades, generative word alignment models such as the IBM Models (Brown et al., 1993) and the HMM model (Vogel et al., 1996) have been widely used, primarily because they are trained on bilingual sentences in an unsupervised manner and the implementation is freely available in the GIZA++ toolkit (Och and Ney, 2003) . However, the word alignment quality of generative models is still far from satisfactory for SMT systems. In recent years, discriminative alignment models incorporating linguistically motivated features have become increasingly popular (Moore, 2005; Taskar et al., 2005; Riesa and Marcu, 2010; Saers et al., 2010; Riesa et al., 2011) . These models are usually trained with manually annotated parallel data. However, when moving to a new language pair, large amounts of hand-aligned data are usually unavailable and expensive to create.",
"cite_spans": [
{
"start": 85,
"end": 105,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
},
{
"start": 124,
"end": 144,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF34"
},
{
"start": 317,
"end": 336,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 574,
"end": 587,
"text": "(Moore, 2005;",
"ref_id": "BIBREF21"
},
{
"start": 588,
"end": 608,
"text": "Taskar et al., 2005;",
"ref_id": "BIBREF32"
},
{
"start": 609,
"end": 631,
"text": "Riesa and Marcu, 2010;",
"ref_id": "BIBREF27"
},
{
"start": 632,
"end": 651,
"text": "Saers et al., 2010;",
"ref_id": "BIBREF30"
},
{
"start": 652,
"end": 671,
"text": "Riesa et al., 2011)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A more practical way to improve large-scale word alignment quality is to introduce syntactic knowledge into a generative model and train the model in an unsupervised manner (Wu, 1997; Yamada and Knight, 2001; Lopez and Resnik, 2005; DeNero and Klein, 2007; Pauls et al., 2010) . In this paper, we take dependency cohesion (Fox, 2002) into account, which assumes phrases dominated by disjoint dependency subtrees tend not to overlap after translation. Instead of treating dependency cohesion as a hard constraint or using it as a feature in discriminative models (Cherry and Lin, 2006b) , we treat dependency cohesion as a distortion constraint, and integrate it into a modified HMM word alignment model to softly influence the probabilities of alignment candidates. We also propose an approximate EM algorithm and an explicit Gibbs sampling algorithm to train the model in an unsupervised manner. Experiments on a large-scale Chinese-English translation task demonstrate that our model achieves improvements in both word alignment quality and machine translation quality.",
"cite_spans": [
{
"start": 173,
"end": 183,
"text": "(Wu, 1997;",
"ref_id": "BIBREF35"
},
{
"start": 184,
"end": 208,
"text": "Yamada and Knight, 2001;",
"ref_id": "BIBREF38"
},
{
"start": 209,
"end": 232,
"text": "Lopez and Resnik, 2005;",
"ref_id": "BIBREF19"
},
{
"start": 233,
"end": 256,
"text": "DeNero and Klein, 2007;",
"ref_id": "BIBREF5"
},
{
"start": 257,
"end": 276,
"text": "Pauls et al., 2010)",
"ref_id": "BIBREF25"
},
{
"start": 322,
"end": 333,
"text": "(Fox, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 562,
"end": 585,
"text": "(Cherry and Lin, 2006b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows: Section 2 introduces dependency cohesion constraint for word alignment. Section 3 presents our generative model for word alignment using dependency cohesion constraint. Section 4 describes algorithms for parameter estimation. We discuss and analyze the experiments in Section 5. Section 6 gives the related work. Finally, we conclude this paper and mention future work in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a source (foreign) sentence f_1^J = f_1, f_2, …, f_J and a target (English) sentence e_1^I = e_1, e_2, …, e_I, the alignment A between f_1^J and e_1^I is defined as a subset of the Cartesian product of word positions: A ⊆ {(j, i): j = 1, …, J; i = 1, …, I}. When given the source side dependency tree T, we can project dependency subtrees in T onto the target sentence through the alignment A. Dependency cohesion assumes projection spans of disjoint subtrees tend not to overlap. Let T(j) be the subtree of T rooted at f_j; we define two kinds of projection span for the node f_j: subtree span and head span. The subtree span is the projection span of the whole subtree T(j), while the head span is the projection span of the node f_j itself. Following Fox (2002) and previous work, we consider two types of dependency cohesion: head-modifier cohesion and modifier-modifier cohesion. Head-modifier cohesion holds when the subtree span of a node does not overlap the head span of its head (parent) node, while modifier-modifier cohesion holds when the subtree spans of two nodes under the same head node do not overlap each other. We call a situation where cohesion is not maintained a crossing.",
"cite_spans": [
{
"start": 707,
"end": 717,
"text": "Fox (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesion Constraint for Word Alignment",
"sec_num": "2"
},
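The subtree-span, head-span, and cohesion definitions above can be sketched in code. The helper names (`subtree_nodes`, `span`, `head_modifier_cohesion`) are ours, not from the paper; this is a minimal illustration assuming a 1-to-at-most-1 alignment from source positions to target positions.

```python
# Sketch of the projection-span definitions and the head-modifier cohesion
# check. heads[k] is k's parent index (-1 for the root); align maps a source
# position to a target position, or None/missing if unaligned.

def in_subtree(heads, k, j):
    """True if source position k lies in the subtree rooted at j."""
    while k != -1:
        if k == j:
            return True
        k = heads[k]
    return False

def subtree_nodes(heads, j):
    """All source positions in the subtree rooted at j (including j)."""
    return [k for k in range(len(heads)) if in_subtree(heads, k, j)]

def span(align, nodes):
    """Projection span (lo, hi) of target positions aligned to `nodes`; None if all unaligned."""
    tgt = [align[n] for n in nodes if align.get(n) is not None]
    return (min(tgt), max(tgt)) if tgt else None

def overlaps(s1, s2):
    return s1 is not None and s2 is not None and s1[0] <= s2[1] and s2[0] <= s1[1]

def head_modifier_cohesion(heads, align, j):
    """Cohesion holds if the subtree span of j does not overlap the
    head span (single-node span) of j's parent."""
    h = heads[j]
    if h == -1:
        return True  # the root has no head to clash with
    return not overlaps(span(align, subtree_nodes(heads, j)), span(align, [h]))
```

A modifier-modifier check would compare `span(align, subtree_nodes(heads, m))` for two siblings in exactly the same way.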
{
"text": "Using the dependency tree in Figure 1 as an example (Figure 1 shows a Chinese-English sentence pair including the word alignments and the Chinese side dependency tree; the Chinese and English words are listed horizontally and vertically, respectively, the black grids are gold-standard alignments, and for the Chinese word \"\u6709/have\" we give two alignment positions, where \"R\" is the correct alignment and \"W\" is the incorrect alignment), given the correct alignment \"R\", the subtree span of \"\u6709/have\" is [8, 14], and the head span of its head node \"\u4e4b\u4e00/one of\" is [3, 4]. They do not overlap each other, so the head-modifier cohesion is maintained. Similarly, the subtree span of \"\u5c11\u6570/few\" is [6, 6], and it does not overlap the subtree span of \"\u6709/have\", so a modifier-modifier cohesion is maintained. However, when \"R\" is replaced with the incorrect alignment \"W\", the subtree span of \"\u6709/have\" becomes [3, 14], and it overlaps the head span of its head \"\u4e4b\u4e00/one of\", so a head-modifier crossing occurs. Meanwhile, the subtree spans of the two nodes \"\u6709/have\" and \"\u5c11\u6570/few\" overlap each other, so a modifier-modifier crossing occurs. Fox (2002) showed that dependency cohesion is generally maintained between English and French. To test how well this assumption holds between Chinese and English, we measure the dependency cohesion between the two languages with a manually annotated bilingual Chinese-English data set of 502 sentence pairs (the development set used in Section 5). We use the head-modifier cohesion percentage (HCP) and the modifier-modifier cohesion percentage (MCP) to measure the degree of cohesion in the corpus: HCP (or MCP) measures how many head-modifier (or modifier-modifier) pairs are actually cohesive. Table 1 lists the percentages in both Chinese-to-English (ch-en, using Chinese side dependency trees) and English-to-Chinese (en-ch, using English side dependency trees) directions. As we see from Table 1 , dependency cohesion is",
"cite_spans": [
{
"start": 118,
"end": 121,
"text": "[8,",
"ref_id": null
},
{
"start": 122,
"end": 125,
"text": "14]",
"ref_id": null
},
{
"start": 178,
"end": 181,
"text": "[3,",
"ref_id": null
},
{
"start": 182,
"end": 184,
"text": "4]",
"ref_id": null
},
{
"start": 307,
"end": 310,
"text": "[6,",
"ref_id": null
},
{
"start": 311,
"end": 313,
"text": "6]",
"ref_id": null
},
{
"start": 518,
"end": 521,
"text": "[3,",
"ref_id": null
},
{
"start": 522,
"end": 525,
"text": "14]",
"ref_id": null
},
{
"start": 747,
"end": 757,
"text": "Fox (2002)",
"ref_id": "BIBREF7"
},
{
"start": 1557,
"end": 1558,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1318,
"end": 1325,
"text": "Table 1",
"ref_id": null
},
{
"start": 1524,
"end": 1531,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dependency Cohesion Constraint for Word Alignment",
"sec_num": "2"
},
{
"text": "generally maintained between Chinese and English. So dependency cohesion would be helpful for word alignment between Chinese and English. However, there are still a number of crossings. If we restrict alignment space with a hard cohesion constraint, the correct alignments that result in crossings will be ruled out directly. In the next section, we describe an approach to integrating dependency cohesion constraint into a generative model to softly influence the probabilities of alignment candidates. We show that our new approach addresses the shortcomings of using dependency cohesion as a hard constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesion Constraint for Word Alignment",
"sec_num": "2"
},
{
"text": "The most influential generative word alignment models are the IBM Models 1-5 and the HMM model (Brown et al., 1993; Vogel et al., 1996; Och and Ney, 2003) . These models can be classified into sequence-based models (IBM Models 1, 2 and HMM) and fertility-based models (IBM Models 3, 4 and 5). Sequence-based models are easier to implement, and recent experiments have shown that an appropriately modified sequence-based model can produce performance comparable to fertility-based models (Lopez and Resnik, 2005; Liang et al., 2006; DeNero and Klein, 2007; Zhao and Gildea, 2010; Bansal et al., 2011) . We therefore build our generative word alignment model with the dependency cohesion constraint on top of the sequence-based model.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF1"
},
{
"start": 116,
"end": 135,
"text": "Vogel et al., 1996;",
"ref_id": "BIBREF34"
},
{
"start": 136,
"end": 154,
"text": "Och and Ney, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 488,
"end": 512,
"text": "(Lopez and Resnik, 2005;",
"ref_id": "BIBREF19"
},
{
"start": 513,
"end": 532,
"text": "Liang et al., 2006;",
"ref_id": "BIBREF17"
},
{
"start": 533,
"end": 556,
"text": "DeNero and Klein, 2007;",
"ref_id": "BIBREF5"
},
{
"start": 557,
"end": 579,
"text": "Zhao and Gildea, 2010;",
"ref_id": "BIBREF40"
},
{
"start": 580,
"end": 600,
"text": "Bansal et al., 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Generative Word Alignment Model with Dependency Cohesion Constraint",
"sec_num": "3"
},
{
"text": "According to Brown et al. (1993) and Och and Ney (2003) , the sequence-based model is built as a noisy channel model, where the source sentence f_1^J and the alignment a_1^J are generated conditioned on the target sentence e_1^I. The model assumes each source word is assigned to exactly one target word, and defines an asymmetric alignment for the sentence pair as a_1^J = a_1, a_2, …, a_j, …, a_J, where each a_j ∈ [0, I] is an alignment from the source position j to the target position a_j, and a_j = 0 means f_j is not aligned with any target word. The sequence-based model divides the alignment procedure into two stages (distortion and translation) and factors as:",
"cite_spans": [
{
"start": 13,
"end": 32,
"text": "Brown et al. (1993)",
"ref_id": "BIBREF1"
},
{
"start": 37,
"end": 55,
"text": "Och and Ney (2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequence-based Alignment Model",
"sec_num": "3.1"
},
{
"text": "P(f_1^J, a_1^J | e_1^I) = ∏_{j=1}^{J} p_d(a_j | a_{j-1}, I) p_t(f_j | e_{a_j})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequence-based Alignment Model",
"sec_num": "3.1"
},
{
"text": "(1) where p_d is the distortion model and p_t is the translation model. IBM Models 1, 2 and the HMM model all assume the same translation model p_t(f_j | e_{a_j}). However, they use three different distortion models. IBM Model 1 assumes a uniform distortion probability 1/(I+1), IBM Model 2 assumes p_d(a_j | j, I), which depends on the word position j, and the HMM model assumes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequence-based Alignment Model",
"sec_num": "3.1"
},
{
"text": "p_d(a_j | a_{j-1}, I)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequence-based Alignment Model",
"sec_num": "3.1"
},
{
"text": "that depends on the previous alignment a_{j-1}. Recently, tree distance models (Lopez and Resnik, 2005; DeNero and Klein, 2007) formulate the distortion model as",
"cite_spans": [
{
"start": 75,
"end": 99,
"text": "(Lopez and Resnik, 2005;",
"ref_id": "BIBREF19"
},
{
"start": 100,
"end": 123,
"text": "DeNero and Klein, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequence-based Alignment Model",
"sec_num": "3.1"
},
{
"text": "p_d(a_j | a_{j-1}, T)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequence-based Alignment Model",
"sec_num": "3.1"
},
{
"text": ", where the distance between a_j and a_{j-1} is calculated by walking through the phrase (or dependency) tree T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequence-based Alignment Model",
"sec_num": "3.1"
},
{
"text": "To integrate the dependency cohesion constraint into a generative model, we refine the sequence-based model in two ways with the help of the source side dependency tree T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "First, we design a new word alignment order. In the sequence-based model, source words are aligned from left to right by taking the source sentence as a linear sequence. However, to apply the dependency cohesion constraint, the subtree span of a head node is computed based on the alignments of its children, so children must be aligned before the head node. Riesa and Marcu (2010) propose a hierarchical search procedure to traverse all nodes in a phrase structure tree. Similarly, we define a bottom-up topological order (BUT-order) to traverse all words in the source side dependency tree T. In the BUT-order, tree nodes are aligned bottom-up with T as a backbone. For all children under the same head node, left children are aligned from right to left, and then right children are aligned from left to right. For example, the BUT-order for the following dependency tree is \"C B E F D A H G\". Table 1 : Cohesion percentages (%) of a manually annotated data set between Chinese and English.",
"cite_spans": [
{
"start": 351,
"end": 373,
"text": "Riesa and Marcu (2010)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 885,
"end": 892,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
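The BUT-order traversal rule above can be sketched as follows. This is our hypothetical reading of the rule, not the paper's code, assuming `heads[j]` gives the parent index of word j (with -1 for the root):

```python
def but_order(heads):
    """Bottom-up topological (BUT) order: for each head, visit left children
    right-to-left, then right children left-to-right, recursing so that
    every child's subtree is emitted before the head itself."""
    children = [[] for _ in heads]
    root = -1
    for j, h in enumerate(heads):
        if h == -1:
            root = j
        else:
            children[h].append(j)
    order = []
    def visit(h):
        for c in sorted((c for c in children[h] if c < h), reverse=True):
            visit(c)  # left children, right-to-left
        for c in sorted(c for c in children[h] if c > h):
            visit(c)  # right children, left-to-right
        order.append(h)  # the head comes after all of its children
    visit(root)
    return order
```

For a 4-word sentence whose root (position 2) has left children at positions 0 and 1 and a right child at 3, this yields the order [1, 0, 3, 2]: children first, head last.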
{
"text": "For the sake of clarity, we define a function to map all nodes in T into their BUT-order, and notate it as BUT(T) = c_1, c_2, …, c_j, …, c_J, where c_j means that the j-th node in BUT-order is the c_j-th word in the original source sentence. We arrange the alignment sequence a_1^J according to the BUT-order and notate it as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "a_[1, J] = a_{c_1}, …, a_{c_j}, …, a_{c_J}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": ", where a_{c_j} is the aligned position for node c_j. We also notate the sub-sequence a_{c_m}, …, a_{c_n} as a_[m, n].",
"cite_spans": [
{
"start": 86,
"end": 91,
"text": "[ , ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "Second, we keep the same translation model as the sequence-based model and integrate the dependency cohesion constraints into the distortion model. The main idea is to influence the distortion procedure with the dependency cohesion constraints. Assume node h and node m are a head-modifier pair in T, where h is the head and m is the modifier. The head-modifier cohesion relationship between them is notated as r_{h,m} ∈ {cohesion, crossing}. When the head-modifier cohesion is maintained, r_{h,m} = cohesion; otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "r_{h,m} = crossing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": ".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "We represent the set of head-modifier cohesion relationships for all the head-modifier pairs in T as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "R_hm = { r_{h,m} | h ∈ [1, J], m ∈ [1, J], h ≠ m, h and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "m are a head-modifier pair in T }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "The set of head-modifier cohesion relationships for all the head-modifier pairs taking h as the head node can be represented as: Similarly, we assume node m and node s are a modifier-modifier pair in T. To avoid repetition, we assume s is the node sitting at the later position of the pair in BUT-order and call s the higher-order node of the pair. The modifier-modifier cohesion relationship between them is notated as r_{m,s}. We represent the set of modifier-modifier cohesion relationships for all the modifier-modifier pairs in T as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "R_hm(h) = { r_{h,m} | m ∈ [1, J], m ≠ h, h and m are a head-modifier pair in T }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "R_mm = { r_{m,s} | m ∈ [1, J], s ∈ [1, J], m ≠ s,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "m and s are a modifier-modifier pair in T } The set of modifier-modifier cohesion relationships for all the modifier-modifier pairs taking s as the higher-order node can be represented as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "R_mm(s) = { r_{m,s} | m ∈ [1, J], m ≠ s,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "m and s are a modifier-modifier pair in T } Obviously, R_mm = ⋃_s R_mm(s). With the above notations, we formulate the distortion probability for a node c_j as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "p_d(a_{c_j}, R_hm(c_j), R_mm(c_j) | a_[1, j-1]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "According to Eq. (1) and the two improvements, we formulated our model as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "P(f_1^J, a_[1, J] | e_1^I, T) = P(a_[1, J], R_hm, R_mm, f_1^J | e_1^I, T) ≈ ∏_{c_j ∈ BUT(T)} p_d(a_{c_j}, R_hm(c_j), R_mm(c_j) | a_[1, j-1]) p_t(f_{c_j} | e_{a_{c_j}})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "(2) Here, we use the approximation symbol because the right-hand side is not guaranteed to be normalized. In practice, we only compute ratios of these terms, so this is not actually a problem. Such a model is called deficient (Brown et al., 1993) , and many successful unsupervised models are deficient, e.g., IBM Model 3 and IBM Model 4.",
"cite_spans": [
{
"start": 223,
"end": 243,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3.2"
},
{
"text": "We assume the distortion procedure is influenced by three factors: words distance, head-modifier cohesion and modifier-modifier cohesion. Therefore, we further decompose the distortion model into three terms as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_d(a_{c_j}, R_hm(c_j), R_mm(c_j) | a_[1, j-1]) = p(a_{c_j} | a_[1, j-1]) p(R_hm(c_j) | a_[1, j]) p(R_mm(c_j) | a_[1, j], R_hm(c_j)) ≈ p_dist(a_{c_j} | a_{c_{j-1}}, I) p_hc(R_hm(c_j) | a_[1, j]) p_mc(R_mm(c_j) | a_[1, j])",
"eq_num": "(3)"
}
],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "where p_dist is the word distance term, p_hc is the head-modifier cohesion term, and p_mc is the modifier-modifier cohesion term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "The word distance term p_dist has been verified to be very useful in the HMM alignment model. However, in our model, the word distance is calculated based on the previous node in the BUT-order rather than the previous word in the original sentence. We follow the HMM word alignment model (Vogel et al., 1996) and parameterize p_dist in terms of the jump width:",
"cite_spans": [
{
"start": 276,
"end": 296,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_dist(a_j | a', I) = c(a_j - a') / ∑_{a''} c(a'' - a')",
"eq_num": "(4)"
}
],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "where c(·) is the count of the jump width.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
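The jump-width normalization in Eq. (4)-style distortion can be illustrated with a small sketch. The function name `p_dist` and the toy counts are ours, not from the paper:

```python
def p_dist(counts, a_j, a_prev, I):
    """Jump-width distortion probability: the count of the jump (a_j - a_prev),
    normalized over all candidate target positions a'' in 1..I.
    `counts` maps a jump width to its count collected from training data."""
    denom = sum(counts.get(a - a_prev, 0) for a in range(1, I + 1))
    return counts.get(a_j - a_prev, 0) / denom if denom else 0.0
```

For example, with counts {0: 2, 1: 6, 2: 2} (short forward jumps dominating), jumping from position 1 to position 2 in a 3-word target sentence gets probability 6/10.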
{
"text": "The head-modifier cohesion term p_hc is used to penalize the distortion probability according to relationships between the head node and its children (modifiers). Therefore, we define p_hc as the product of probabilities for all head-modifier pairs taking c_j as the head node:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "p_hc(R_hm(c_j) | a_[1, j]) = ∏_{r_{c_j,m} ∈ R_hm(c_j)} p_hc(r_{c_j,m} | f_m, e_{a_{c_j}}, e_{a_m}) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "r_{c_j,m} ∈ {cohesion,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "crossing} is the head-modifier cohesion relationship between c_j and one of its children m, p_hc is the corresponding probability, and e_{a_{c_j}} and e_{a_m} are the aligned words for c_j and m. Similarly, the modifier-modifier cohesion term p_mc is used to penalize the distortion probability according to relationships between c_j and its siblings. Therefore, we define p_mc as the product of probabilities for all the modifier-modifier pairs taking c_j as the higher-order node:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "p_mc(R_mm(c_j) | a_[1, j]) = ∏_{r_{m,c_j} ∈ R_mm(c_j)} p_mc(r_{m,c_j} | f_m, e_{a_{c_j}}, e_{a_m}) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "r_{m,c_j} ∈ {cohesion,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "crossing} is the modifier-modifier cohesion relationship between c_j and one of its siblings m, p_mc is the corresponding probability, and e_{a_{c_j}} and e_{a_m} are the aligned words for c_j and m. Both p_hc and p_mc in Eq. (5) and Eq. (6) are conditioned on three words, which would make them very sparse. To cope with this problem, we use the word clustering toolkit, mkcls (Och et al., 1999) , to cluster all words into 50 classes, and replace the three words with their classes.",
"cite_spans": [
{
"start": 321,
"end": 339,
"text": "(Och et al., 1999)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Cohesive Distortion Model",
"sec_num": "3.3"
},
{
"text": "To align sentence pairs with the model in Eq. (2), we have to estimate several parameters: p_dist, p_t, p_hc and p_mc. The traditional approach for sequence-based models uses the Expectation-Maximization (EM) algorithm to estimate parameters. However, in our model, it is hard to find an efficient way to sum over all the possible alignments, which is required in the E-step of the EM algorithm. Therefore, we propose an approximate EM algorithm and a Gibbs sampling algorithm for parameter estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "4"
},
{
"text": "The approximate EM algorithm is similar to the training algorithm for fertility-based alignment models (Och and Ney, 2003) . The main idea is to enumerate only a small subset of good alignments in the E-step, then collect expectation counts and estimate parameters over this small subset in the M-step. Following Och and Ney (2003) , we employ neighbor alignments of the Viterbi alignment as the small subset. Neighbor alignments are obtained by performing one swap or move operation over the Viterbi alignment.",
"cite_spans": [
{
"start": 103,
"end": 122,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 313,
"end": 331,
"text": "Och and Ney (2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate EM Algorithm",
"sec_num": "4.1"
},
{
"text": "Obtaining the Viterbi alignment itself is not so easy for our model. Therefore, we take the Viterbi alignment of the sequence-based model (HMM model) as the starting point, and iterate the hill-climbing algorithm (Brown et al., 1993) many times to get the best alignment greedily. In each iteration, we find the best alignment under Eq. (2) among the neighbor alignments of the initial point, and then take that best alignment as the initial point for the next iteration. The algorithm iterates until no update can be made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate EM Algorithm",
"sec_num": "4.1"
},
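The hill-climbing loop described above has this generic shape. This is a sketch, with `score` standing in for the model probability of Eq. (2) and `neighbors` for the one-move/one-swap neighborhood generator; neither name comes from the paper:

```python
def hill_climb(score, align, neighbors):
    """Greedy hill climbing: repeatedly jump to the best-scoring neighbor
    alignment until no neighbor improves on the current one."""
    best = align
    while True:
        cands = list(neighbors(best))
        if not cands:
            return best
        cand = max(cands, key=score)
        if score(cand) <= score(best):
            return best  # local optimum reached
        best = cand
```

On a toy 1-D problem (score peaked at 5, neighbors one step left/right), the loop walks from any start to the peak and stops.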
{
"text": "Gibbs sampling is another effective algorithm for unsupervised learning problems. As described in the literature (Johnson et al., 2007; Gao and Johnson, 2008) , there are two types of Gibbs samplers: explicit and collapsed. An explicit sampler represents and samples the model parameters in addition to the word alignments, while in a collapsed sampler the parameters are integrated out and only alignments are sampled. Mermer and Sara\u00e7lar (2011) proposed a collapsed sampler for IBM Model 1. However, their sampler updates parameters constantly and thus cannot run efficiently on large-scale tasks. Instead, we take advantage of explicit Gibbs sampling to build a highly parallelizable sampler. Our Gibbs sampler is similar to the MCMC algorithm of Zhao and Gildea (2010), but we assume Dirichlet priors when sampling model parameters and take a different sampling approach based on the source side dependency tree.",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "(Johnson et al., 2007;",
"ref_id": "BIBREF13"
},
{
"start": 140,
"end": 162,
"text": "Gao and Johnson, 2008)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling Algorithm",
"sec_num": "4.2"
},
{
"text": "Our sampler performs a sequence of consecutive iterations. Each iteration consists of two sampling steps. The first step samples the aligned position for each dependency node according to the BUT-order. Concretely, when sampling the aligned position for node c_j on iteration t+1, the positions for nodes before c_j in BUT-order have already been resampled on iteration t+1, while the positions for nodes after c_j still carry their values from iteration t. Therefore, we sample the aligned position a_{c_j}^{(t+1)} as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a_{c_j}^{(t+1)} ~ P(a_{c_j} | a_[1, j-1]^{(t+1)}, a_[j+1, J]^{(t)}, f_1^J, e_1^I) = P(f_1^J, â | e_1^I) / ∑_{a_{c_j} ∈ {0, 1, …, I}} P(f_1^J, â | e_1^I)",
"eq_num": "(7)"
}
],
"section": "Gibbs Sampling Algorithm",
"sec_num": "4.2"
},
{
"text": ") calculated with Eq. 2, and the denominator is the summation of the probabilities of aligning with each target word. The second step of our sampler calculates parameters , , \u210e and using their counts, where all these counts can be easily collected during the first sampling step. Because all these parameters follow multinomial distributions, we consider Dirichlet priors for them, which would greatly simplify the inference procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling Algorithm",
"sec_num": "4.2"
},
{
"text": "In the first sampling step, all the sentence pairs are processed independently. So we can make this step parallel and process all the sentence pairs efficiently with multi-threads. When using the Gibbs sampler for decoding, we just ignore the second sampling step and iterate the first sampling step many times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gibbs Sampling Algorithm",
"sec_num": "4.2"
},
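To make the two-step structure of the explicit sampler concrete, here is a minimal, hypothetical sketch for a Model-1-style translation table with a symmetric Dirichlet prior. It omits the paper's distortion and cohesion terms and the BUT-order traversal; all names (`gibbs_iteration`, `alpha`) are illustrative, not the authors' implementation.

```python
import random
from collections import defaultdict

def gibbs_iteration(pairs, align, t, alpha=0.0001):
    """One explicit-Gibbs iteration: step 1 resamples each source word's
    aligned target position given the current table t; step 2 re-estimates
    t from the collected counts (posterior mean under a Dirichlet prior)."""
    counts = defaultdict(float)
    for k, (src, tgt) in enumerate(pairs):
        for j, f in enumerate(src):
            # Step 1: draw a_j with probability proportional to t(f | e_i).
            weights = [t.get((f, e), 1e-6) for e in tgt]
            r, acc, a_j = random.random() * sum(weights), 0.0, len(tgt) - 1
            for i, w in enumerate(weights):
                acc += w
                if r <= acc:
                    a_j = i
                    break
            align[k][j] = a_j
            counts[(f, tgt[a_j])] += 1.0
    # Step 2: explicit parameter update with Dirichlet smoothing.
    totals = defaultdict(float)
    for (f, e), c in counts.items():
        totals[e] += c + alpha
    for (f, e), c in counts.items():
        t[(f, e)] = (c + alpha) / totals[e]
    return align, t
```

Because step 1 touches only the (fixed) parameters from the previous iteration, the loop over sentence pairs can be split across threads exactly as the paragraph above describes.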
{
"text": "We performed a series of experiments to evaluate our model. All the experiments are conducted on the Chinese-English language pair. We employ two training sets: FBIS and LARGE. The size and source corpus of these training sets are listed in Table 2 . We will use the smaller training set FBIS to evaluate the characters of our model and use the LARGE training set to evaluate whether our model is adaptable for large-scale task. For word alignment quality evaluation, we take the handaligned data sets from SSMT2007 2 , which contains 505 sentence pairs in the testing set and 502 sentence pairs in the development set. Following Och and Ney (2003) , we evaluate word alignment quality with the alignment error rate (AER), where lower AER is better.",
"cite_spans": [
{
"start": 630,
"end": 648,
"text": "Och and Ney (2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
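As a reference for the evaluation metric, AER over sure links S and possible links P can be computed as in the following sketch of Och and Ney's definition (variable names are illustrative):

```python
def aer(alignment, sure, possible):
    """Alignment Error Rate (Och and Ney, 2003). Arguments are sets of
    (src_idx, tgt_idx) links; `sure` is assumed to be a subset of
    `possible`. Lower is better."""
    a, s, p = set(alignment), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# Toy pair: one sure link and one extra possible link, both recovered.
sure = {(0, 0)}
possible = {(0, 0), (1, 1)}
print(aer({(0, 0), (1, 1)}, sure, possible))  # 0.0
```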
{
"text": "Because our model takes dependency trees as input, we parse both sides of the two training sets, the development set and the testing set with Berkeley parser (Petrov et al., 2006) , and then convert the generated phrase trees into dependency trees according to Wang and Zong (2010; 2011) . Our model is an asymmetric model, so we perform word alignment in both forward (Chinese\uf0e0English) and reverse (English\uf0e0Chinese) directions.",
"cite_spans": [
{
"start": 158,
"end": 179,
"text": "(Petrov et al., 2006)",
"ref_id": "BIBREF26"
},
{
"start": 261,
"end": 281,
"text": "Wang and Zong (2010;",
"ref_id": "BIBREF36"
},
{
"start": 282,
"end": 287,
"text": "2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In Eq. 3, the distortion probability is decomposed into three terms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Cohesion Constraints",
"sec_num": "5.1"
},
{
"text": ", \u210e and . To study whether cohesion constraints are effective for word alignment, we construct four sub-models as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Cohesion Constraints",
"sec_num": "5.1"
},
{
"text": "(1) wd: = ; (2) wd-hc: = \u2022 \u210e ; (3) wd-mc: = \u2022 ; (4) wd-hc-mc: = \u2022 \u210e \u2022 . We train these four models with the approximate EM and the Gibbs sampling algorithms on the FBIS training set. For approximate EM algorithm, we first train a HMM model (with 5 iterations of IBM model 1 and 5 iterations of HMM model), then train these four sub-models with 10 iterations of the approximate EM algorithm. For Gibbs sampling, we choose symmetric Dirichlet priors identically with all hyper-parameters equals 0.0001 to obtain a sparse Dirichlet prior. Then, we make the alignments produced by the HMM model as the initial points, and train these sub-models with 20 iterations of the Gibbs sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Cohesion Constraints",
"sec_num": "5.1"
},
{
"text": "AERs on the development set are listed in Table 3 . We can easily find: 1) when employing the head-modifier cohesion constraint, the wd-hc model yields better AERs than the wd model; 2) when employing the modifier-modifier cohesion constraint, the wd-mc model also yields better AERs than the wd model; and 3) when employing both head-modifier cohesion constraint and modifier-modifier cohesion constraint together, the wd-hc-mc model yields the best AERs among the four sub-models. So both head-modifier cohesion constraint and modifier-modifier cohesion constraint are helpful for word alignment. Table 3 also shows that the approximate EM algorithm yields better AERs in the forward direction than reverse direction, while the Gibbs sampling algorithm yields close AERs in both directions.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 50,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 600,
"end": 607,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Effectiveness of Cohesion Constraints",
"sec_num": "5.1"
},
{
"text": "To show the effectiveness of our model, we compare our model with some of the state-of-theart models. All the systems are listed as follows: 1) IBM4: The fertility-based model (IBM model 4) which is implemented in GIZA++ toolkit. The training scheme is 5 iterations of IBM model 1, 5 iterations of the HMM model and 10 iterations of IBM model 4. 2) IBM4-L0: A modification to the GIZA++ toolkit which extends IBM models with \u2113 0norm (Vaswani et al., 2012) . The training scheme is the same as IBM4. 3) IBM4-Prior: A modification to the GIZA++ toolkit which extends the translation model of IBM models with Dirichlet priors (Riley and Gildea, 2012) . The training scheme is the same as IBM4. 4) Agree-HMM: The HMM alignment model by jointly training the forward and reverse models (Liang et al., 2006) , which is implemented in the BerkeleyAligner. The training scheme is 5 iterations of jointly training IBM model 1 and 5 iterations of jointly training HMM model. 5) Tree-Distance: The tree distance alignment model proposed in DeNero and Klein (2007) , which is implemented in the BerkeleyAligner.",
"cite_spans": [
{
"start": 433,
"end": 455,
"text": "(Vaswani et al., 2012)",
"ref_id": "BIBREF33"
},
{
"start": 623,
"end": 647,
"text": "(Riley and Gildea, 2012)",
"ref_id": "BIBREF29"
},
{
"start": 780,
"end": 800,
"text": "(Liang et al., 2006)",
"ref_id": "BIBREF17"
},
{
"start": 1028,
"end": 1051,
"text": "DeNero and Klein (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with State-of-the-Art Models",
"sec_num": "5.2"
},
{
"text": "The training scheme is 5 iterations of jointly training IBM model 1 and 5 iterations of jointly training the tree distance model. 6) Hard-Cohesion: The implemented \"Cohesion Checking Algorithm\" which takes dependency cohesion as a hard constraint during beam search word alignment decoding. We use the model trained by the Agree-HMM system to estimate alignment candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with State-of-the-Art Models",
"sec_num": "5.2"
},
{
"text": "We also build two systems for our soft dependency cohesion model: 7) Soft-Cohesion-EM: the wd-hc-mc sub-model trained with the approximate EM algorithm as described in sub-section 5.1. 8) Soft-Cohesion-Gibbs: the wd-hc-mc sub-model trained with the Gibbs sampling algorithm as described in sub-section 5.1. We train all these systems on the FBIS training set, and test them on the testing set. We also combine the forward and reverse alignments with the grow-diag-final-and (GDFA) heuristic (Koehn et al., 2007) . All AERs are listed in Table 4 . We find our soft cohesion systems produce better AERs than the Hard-Cohesion system as well as the other systems. Table 5 gives the head-modifier cohesion percentage (HCP) and the modifiermodifier cohesion percentage (MCP) of each system. We find HCPs and MCPs of our soft cohesion systems are much closer to the goldstandard alignments.",
"cite_spans": [
{
"start": 491,
"end": 511,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 537,
"end": 544,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 661,
"end": 668,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Comparison with State-of-the-Art Models",
"sec_num": "5.2"
},
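The GDFA symmetrization used above can be sketched roughly as follows. This is a simplified, hypothetical rendering of grow-diag-final-and; the Moses implementation differs in iteration order and splits the final step per direction.

```python
# Diagonal and adjacent neighbors used by the "grow-diag" phase.
NEIGHBORS = [(-1, 0), (0, -1), (1, 0), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]

def grow_diag_final_and(forward, reverse):
    """Symmetrize two directional alignments (sets of (src, tgt) links):
    start from their intersection, grow into neighboring union links that
    touch an unaligned word, then add remaining union links whose source
    and target words are both still unaligned (simplified)."""
    union = forward | reverse
    aligned = forward & reverse
    grew = True
    while grew:
        grew = False
        for (i, j) in sorted(aligned):
            for di, dj in NEIGHBORS:
                cand = (i + di, j + dj)
                if cand in union and cand not in aligned and (
                        all(s != cand[0] for s, _ in aligned)
                        or all(t != cand[1] for _, t in aligned)):
                    aligned.add(cand)
                    grew = True
    for (i, j) in sorted(union - aligned):  # simplified "final-and"
        if all(s != i for s, _ in aligned) and all(t != j for _, t in aligned):
            aligned.add((i, j))
    return aligned
```

Starting from the high-precision intersection and growing toward the high-recall union is what lets GDFA trade off the asymmetric errors of the forward and reverse models.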
{
"text": "To evaluate whether our model is adaptable for large-scale task, we retrained these systems using the LARGE training set. AERs on the testing set are listed in Table 3 6. Compared with Table 4, we 3 Tree-Distance system requires too much memory to run on our server when using the LARGE data set, so we can't get the result. find all the systems yield better performance when using more training data. Our soft cohesion systems still produce better AERs than other systems, suggesting that our soft cohesion model is very effective for large-scale word alignment tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 185,
"end": 196,
"text": "Table 4, we",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Comparison with State-of-the-Art Models",
"sec_num": "5.2"
},
{
"text": "We then evaluate the effect of word alignment on machine translation quality using the phrase-based translation system Moses (Koehn et al., 2007) . We take NIST MT03 test data as the development set, NIST MT05 test data as the testing set. We train a 5-gram language model with the Xinhua portion of English Gigaword corpus and the English side of the training set using the SRILM Toolkit (Stolcke, 2002) . We train machine translation models using GDFA alignments of each system. BLEU scores on NIST MT05 are listed in Table 7 , where BLEU scores are calculated using lowercased and tokenized data (Papineni et al., 2002) . Although the IBM4-L0, Agree-HMM, Tree-Distance and Hard-Cohesion systems improve word alignment than IBM4, they fail to outperform the IBM4 system on machine translation. The BLEU score of our Soft-Cohesion-EM system is better than the IBM4 system when using the FBIS training set, but worse when using the LARGE training set. Our Soft-Cohesion-Gibbs system produces the best BLEU score when using both training sets. We also performed a statistical significance test using bootstrap resampling with 1000 samples (Koehn, 2004; Zhang et al., 2004) . Experimental results show the Soft-Cohesion-Gibbs system is significantly better (p<0.05) than the IBM4 system. The IBM4-Prior system slightly outperforms IBM4, but it's not significant.",
"cite_spans": [
{
"start": 125,
"end": 145,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF16"
},
{
"start": 389,
"end": 404,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF31"
},
{
"start": 599,
"end": 622,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF24"
},
{
"start": 1138,
"end": 1151,
"text": "(Koehn, 2004;",
"ref_id": "BIBREF15"
},
{
"start": 1152,
"end": 1171,
"text": "Zhang et al., 2004)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 520,
"end": 527,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Translation Quality Comparison",
"sec_num": "5.3"
},
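The significance test used above can be sketched as a paired bootstrap in the spirit of Koehn (2004). This hypothetical sketch averages per-sentence scores for simplicity, whereas real BLEU is a corpus-level statistic recomputed on each resample; names (`bootstrap_pvalue`) are illustrative.

```python
import random

def bootstrap_pvalue(scores_a, scores_b, samples=1000, seed=1):
    """Paired bootstrap resampling: resample the test set with
    replacement `samples` times and estimate how often system A fails
    to score strictly higher than system B."""
    rng = random.Random(seed)
    n, wins = len(scores_a), 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return 1.0 - wins / samples  # p-value for "A better than B"
```

A p-value below 0.05 from 1000 resamples corresponds to A winning on at least 95% of the resampled test sets.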
{
"text": "There have been many proposals of integrating syntactic knowledge into generative alignment models. Wu (1997) proposed the inversion transduction grammar (ITG) to model word alignment as synchronous parsing for a sentence pair. Yamada and Knight (2001) represented translation as a sequence of re-ordering operations over child nodes of a syntactic tree. Gildea (2003) introduced a \"loosely\" tree-based alignment technique, which allows alignments to violate syntactic constraints by incurring a cost in probability. Pauls et al. (2010) gave a new instance of the ITG formalism, in which one side of the synchronous derivation is constrained by the syntactic tree. Fox (2002) measured syntactic cohesion in gold standard alignments and showed syntactic cohesion is generally maintained between English and French. She also compared three variant syntactic representations (phrase tree, verb phrase flattening tree and dependency tree), and found the dependency tree produced the highest degree of cohesion. So 2006a) Table 6 : AERs on the testing set (trained on the LARGE data set).",
"cite_spans": [
{
"start": 100,
"end": 109,
"text": "Wu (1997)",
"ref_id": "BIBREF35"
},
{
"start": 228,
"end": 252,
"text": "Yamada and Knight (2001)",
"ref_id": "BIBREF38"
},
{
"start": 355,
"end": 368,
"text": "Gildea (2003)",
"ref_id": "BIBREF10"
},
{
"start": 517,
"end": 536,
"text": "Pauls et al. (2010)",
"ref_id": "BIBREF25"
},
{
"start": 665,
"end": 675,
"text": "Fox (2002)",
"ref_id": "BIBREF7"
},
{
"start": 1010,
"end": 1016,
"text": "2006a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1017,
"end": 1024,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "out directly. Although the alignment quality is improved, they ignored situations where a small set of correct alignments can violate cohesion. To address this limitation, Cherry and Lin (2006b) proposed a soft constraint approach, which took dependency cohesion as a feature of a discriminative model, and verified that the soft constraint works better than the hard constraint. However, the training procedure is very timeconsuming, and they trained the model with only 100 hand-annotated sentence pairs. Therefore, their method is not suitable for large-scale tasks. In this paper, we also use dependency cohesion as a soft constraint. But, unlike Cherry and Lin (2006b) , we integrate the soft dependency cohesion constraint into a generative model that is more suitable for large-scale word alignment tasks.",
"cite_spans": [
{
"start": 172,
"end": 194,
"text": "Cherry and Lin (2006b)",
"ref_id": "BIBREF4"
},
{
"start": 651,
"end": 673,
"text": "Cherry and Lin (2006b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We described a generative model for word alignment that uses dependency cohesion as a soft constraint. We proposed an approximate EM algorithm and an explicit Gibbs sampling algorithm for parameter estimation in an unsupervised manner. Experimental results performed on a large-scale data set show that our model improves word alignment quality as well as machine translation quality. Our experimental results also indicate that the soft constraint approach is much better than the hard constraint approach. It is possible that our word alignment model can be improved further. First, we generated word alignments in both forward and reverse directions separately, but it might be helpful to use dependency trees of the two sides simultaneously. Second, we only used the one-best automatically generated dependency trees in the model. However, errors are inevitable in those trees, so we will investigate how to use N-best dependency trees or dependency forests (Hayashi et al., 2011) to see if they can improve our model.",
"cite_spans": [
{
"start": 962,
"end": 984,
"text": "(Hayashi et al., 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Transactions of the Association for Computational Linguistics, 1 (2013) 291-300. Action Editor: Chris Callison-Burch.Submitted 5/2013; Published 7/2013. c 2013 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.ict.ac.cn/guidelines/guidelines-2007-SSMT(English).doc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Nianwen Xue for insightful discussions on writing this article. We are grateful to anonymous reviewers for many helpful suggestions that helped improve the final version of this article. The research work has been funded by the Hi-Tech Research and Development ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Gappy Phrasal Alignment By Agreement",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Chris Quirk, and Robert Moore, 2011. Gappy Phrasal Alignment By Agreement. In Proc. of ACL 2011.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Della",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra and Robert L. Mercer, 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19 (2). pages 263-311.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A probability model to improve word alignment",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL '03",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Cherry and D. Lin, 2003. A probability model to improve word alignment. In Proc. of ACL '03, pages 88-95.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A comparison of syntactically motivated word alignment spaces",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of EACL '06",
"volume": "",
"issue": "",
"pages": "145--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Cherry and D. Lin, 2006a. A comparison of syntactically motivated word alignment spaces. In Proc. of EACL '06, pages 145-152.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Soft syntactic constraints for word alignment through discriminative training",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of COLING/ACL '06",
"volume": "",
"issue": "",
"pages": "105--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Cherry and D. Lin, 2006b. Soft syntactic constraints for word alignment through discriminative training. In Proc. of COLING/ACL '06, pages 105-112.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Tailoring word alignments to syntactic machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL '07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero and Dan Klein, 2007. Tailoring word alignments to syntactic machine translation. In Proc. of ACL '07, pages 17.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised word alignment with arbitrary features",
"authors": [
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL '11",
"volume": "",
"issue": "",
"pages": "409--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Dyer, J. Clark, A. Lavie and N.A. Smith, 2011. Unsupervised word alignment with arbitrary features. In Proc. of ACL '11, pages 409-419.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Phrasal cohesion and statistical machine translation",
"authors": [
{
"first": "Heidi",
"middle": [
"J"
],
"last": "Fox",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP '02",
"volume": "",
"issue": "",
"pages": "304--3111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heidi J. Fox, 2002. Phrasal cohesion and statistical machine translation. In Proc. of EMNLP '02, pages 304-3111.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "What's in a translation rule",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of NAACL '04",
"volume": "",
"issue": "",
"pages": "344--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, Daniel Marcu, 2004. What's in a translation rule? In Proc. of NAACL '04, pages 344-352.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A comparison of Bayesian estimators for unsupervised Hidden Markov Model POS taggers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP '08",
"volume": "",
"issue": "",
"pages": "344--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Gao and M. Johnson, 2008. A comparison of Bayesian estimators for unsupervised Hidden Markov Model POS taggers. In Proc. of EMNLP '08, pages 344-352.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Loosely Tree-Based Alignment for Machine Translation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL'03",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea, 2003. Loosely Tree-Based Alignment for Machine Translation. In Proc. of ACL'03, pages 80- 87.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Third-order Variational Reranking on Packed-Shared Dependency Forests",
"authors": [
{
"first": "",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of EMNLP '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matsumoto, 2011. Third-order Variational Reranking on Packed-Shared Dependency Forests. In Proc. of EMNLP '11.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bayesian inference for PCFGs via Markov chain",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnson, T. Griffiths and S. Goldwater, 2007. Bayesian inference for PCFGs via Markov chain",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Proc. of NAACL '07",
"authors": [
{
"first": "Monte",
"middle": [],
"last": "Carlo",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "139--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Monte Carlo. In Proc. of NAACL '07, pages 139-146.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of EMNLP'04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP'04.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL '07, Demonstration Session",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran and R. Zens, 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL '07, Demonstration Session, pages 177-180.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Alignment by agreement",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of HLT-NAACL 06",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Ben Taskar and Dan Klein, 2006. Alignment by agreement. In Proc. of HLT-NAACL 06, pages 104-111.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Word alignment with cohesion constraint",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of NAACL '03",
"volume": "",
"issue": "",
"pages": "49--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Lin and C. Cherry, 2003. Word alignment with cohesion constraint. In Proc. of NAACL '03, pages 49-51.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improved HMM alignment models for languages with scarce resources",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL Workshop on Building and Using Parallel Texts '05",
"volume": "",
"issue": "",
"pages": "83--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Lopez and Philip Resnik, 2005. Improved HMM alignment models for languages with scarce resources. In ACL Workshop on Building and Using Parallel Texts '05, pages 83-86.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bayesian word alignment for statistical machine translation",
"authors": [
{
"first": "Cos\u0137un",
"middle": [],
"last": "Mermer",
"suffix": ""
},
{
"first": "Murat",
"middle": [],
"last": "Sara\u00e7 Lar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL '11",
"volume": "",
"issue": "",
"pages": "182--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cos\u0137un Mermer and Murat Sara\u00e7 lar, 2011. Bayesian word alignment for statistical machine translation. In Proc. of ACL '11, pages 182-187.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A discriminative framework for bilingual word alignment",
"authors": [
{
"first": "R",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of EMNLP '05",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.C. Moore, 2005. A discriminative framework for bilingual word alignment. In Proc. of EMNLP '05, pages 81-88.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improved alignment models for statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of EMNLP/WVLC '99",
"volume": "",
"issue": "",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F.J. Och, C. Tillmann and H. Ney, 1999. Improved alignment models for statistical machine translation. In Proc. of EMNLP/WVLC '99, pages 20-28.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney, 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29 (1). pages 19-51.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [
"J"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward and W.J. Zhu, 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL '02, pages 311- 318.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Unsupervised Syntactic Alignment with Inversion Transduction Grammars",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of NAACL '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Pauls, Dan Klein, David Chiang and Kevin Knight, 2010. Unsupervised Syntactic Alignment with Inversion Transduction Grammars. In Proc. of NAACL '10.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux and Dan Klein, 2006. Learning accurate, compact, and interpretable tree annotation. In Proc. of ACL 2006.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hierarchical search for word alignment",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ACL '10",
"volume": "",
"issue": "",
"pages": "157--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Riesa and Daniel Marcu, 2010. Hierarchical search for word alignment. In Proc. of ACL '10, pages 157-166.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Feature-Rich Language-Independent Syntax-Based Alignment for Statistical Machine Translation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Irvine",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of EMNLP '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Riesa, Ann Irvine and Daniel Marcu, 2011. Feature-Rich Language-Independent Syntax-Based Alignment for Statistical Machine Translation. In Proc. of EMNLP '11.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving the IBM Alignment Models Using Variational Bayes",
"authors": [
{
"first": "Darcey",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of ACL '12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darcey Riley and Daniel Gildea, 2012. Improving the IBM Alignment Models Using Variational Bayes. In Proc. of ACL '12.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Word alignment with stochastic bracketing linear inversion transduction grammar",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saers",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of NAACL '10",
"volume": "",
"issue": "",
"pages": "341--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Saers, J. Nivre and D. Wu, 2010. Word alignment with stochastic bracketing linear inversion transduction grammar. In Proc. of NAACL '10, pages 341-344.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "SRILM-an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "ICSLP '02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke, 2002. SRILM-an extensible language modeling toolkit. In ICSLP '02.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A discriminative matching approach to word alignment",
"authors": [
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lacoste-Julien",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of EMNLP '05",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Taskar, S. Lacoste-Julien and D. Klein, 2005. A discriminative matching approach to word alignment. In Proc. of EMNLP '05, pages 73-80.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Smaller alignment models for better translations: unsupervised word alignment with the l0 norm",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. ACL'12",
"volume": "",
"issue": "",
"pages": "311--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Liang Huang, and David Chiang, 2012. Smaller alignment models for better translations: unsupervised word alignment with the l0 norm. In Proc. ACL'12, pages 311-319.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of COLING-96",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney and Christoph Tillmann, 1996. HMM-based word alignment in statistical translation. In Proc. of COLING-96, pages 836-841.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu, 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23 (3). pages 377-403.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Phrase Structure Parsing with Dependency Structure",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of COLING 2010",
"volume": "",
"issue": "",
"pages": "1292--1300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Chengqing Zong, 2010. Phrase Structure Parsing with Dependency Structure, In Proc. of COLING 2010, pages 1292-1300.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Parse Reranking Based on Higher-Order Lexical Dependencies",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. Of IJCNLP 2011",
"volume": "",
"issue": "",
"pages": "1251--1259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Chengqing Zong, 2011. Parse Reranking Based on Higher-Order Lexical Dependencies, In Proc. Of IJCNLP 2011, pages 1251-1259.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of ACL '01",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight, 2001. A syntax-based statistical translation model. In Proc. of ACL '01, pages 523-530.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Interpreting BLEU/NIST scores: How much improvement do we need to have a better system",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Zhang, Stephan Vogel, and Alex Waibel. 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system? In Proc. of LREC.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A fast fertility hidden Markov model for word alignment using MCMC",
"authors": [
{
"first": "Shaojun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of EMNLP '10",
"volume": "",
"issue": "",
"pages": "596--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaojun Zhao and Daniel Gildea, 2010. A fast fertility hidden Markov model for word alignment using MCMC. In Proc. of EMNLP '10, pages 596-605.",
"links": null
}
},
"ref_entries": {
"TABREF4": {
"text": "The size and the source corpus of the two training sets.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF6": {
"text": "AERs on the testing set (trained on the FBIS data set).",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td>EM</td><td/><td colspan=\"2\">Gibbs</td></tr><tr><td/><td colspan=\"4\">forward reverse forward reverse</td></tr><tr><td>wd</td><td>26.12</td><td>28.66</td><td>27.09</td><td>26.40</td></tr><tr><td>wd-hc</td><td>24.67</td><td>25.86</td><td>26.24</td><td>24.39</td></tr><tr><td>wd-mc</td><td>24.49</td><td>26.53</td><td>25.51</td><td>25.40</td></tr><tr><td>wd-hc-mc</td><td>23.63</td><td>25.17</td><td>24.65</td><td>24.33</td></tr></table>"
},
"TABREF7": {
"text": "AERs on the development set (trained on the FBIS data set).",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF9": {
"text": "HCPs and MCPs on the development set.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td/><td/><td>FBIS LARGE</td></tr><tr><td/><td/><td/><td/><td>IBM4</td><td>30.7</td><td>33.1</td></tr><tr><td/><td/><td/><td/><td>IBM4-L0</td><td>30.4</td><td>32.3</td></tr><tr><td/><td/><td/><td/><td>IBM4-Prior</td><td>30.9</td><td>33.2</td></tr><tr><td/><td/><td/><td/><td>Agree-HMM</td><td>27.2</td><td>30.1</td></tr><tr><td/><td/><td/><td/><td>Tree-Distance</td><td>28.2</td><td>N/A</td></tr><tr><td/><td/><td/><td/><td>Hard-Cohesion</td><td>30.4</td><td>32.2</td></tr><tr><td>IBM4 IBM4-L0 IBM4-Prior</td><td colspan=\"3\">forward reverse GDFA 37.45 39.18 40.52 38.17 38.88 39.82 35.86 36.71 37.08</td><td>Soft-Cohesion-EM Soft-Cohesion-Gibbs 31.6* 30.9 Table 7: BLEU scores, where * indicates 33.1 33.9* significantly better than IBM4 (p&lt;0.05).</td></tr><tr><td>Agree-HMM</td><td>35.58</td><td>35.73</td><td>39.10</td></tr><tr><td>Hard-Cohesion</td><td>35.04</td><td>37.59</td><td>37.63</td></tr><tr><td>Soft-Cohesion-EM</td><td>30.93</td><td>32.67</td><td>33.65</td></tr><tr><td>Soft-Cohesion-Gibbs</td><td>32.07</td><td>32.68</td><td>32.28</td></tr></table>"
}
}
}
}