{
"paper_id": "P15-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:10:58.506920Z"
},
"title": "Syntax-based Simultaneous Translation through Prediction of Unseen Syntactic Constituents",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology Takayamacho",
"location": {
"postCode": "630-0192",
"settlement": "Ikoma",
"region": "Nara",
"country": "Japan"
}
},
"email": "oda.yusuke.on9@is.naist.jp"
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology Takayamacho",
"location": {
"postCode": "630-0192",
"settlement": "Ikoma",
"region": "Nara",
"country": "Japan"
}
},
"email": "neubig@is.naist.jp"
},
{
"first": "Sakriani",
"middle": [],
"last": "Sakti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology Takayamacho",
"location": {
"postCode": "630-0192",
"settlement": "Ikoma",
"region": "Nara",
"country": "Japan"
}
},
"email": "ssakti@is.naist.jp"
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology Takayamacho",
"location": {
"postCode": "630-0192",
"settlement": "Ikoma",
"region": "Nara",
"country": "Japan"
}
},
"email": "tomoki@is.naist.jp"
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology Takayamacho",
"location": {
"postCode": "630-0192",
"settlement": "Ikoma",
"region": "Nara",
"country": "Japan"
}
},
"email": "s-nakamura@is.naist.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Simultaneous translation is a method to reduce the latency of communication through machine translation (MT) by dividing the input into short segments before performing translation. However, short segments pose problems for syntaxbased translation methods, as it is difficult to generate accurate parse trees for sub-sentential segments. In this paper, we perform the first experiments applying syntax-based SMT to simultaneous translation, and propose two methods to prevent degradations in accuracy: a method to predict unseen syntactic constituents that help generate complete parse trees, and a method that waits for more input when the current utterance is not enough to generate a fluent translation. Experiments on English-Japanese translation show that the proposed methods allow for improvements in accuracy, particularly with regards to word order of the target sentences.",
"pdf_parse": {
"paper_id": "P15-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "Simultaneous translation is a method to reduce the latency of communication through machine translation (MT) by dividing the input into short segments before performing translation. However, short segments pose problems for syntaxbased translation methods, as it is difficult to generate accurate parse trees for sub-sentential segments. In this paper, we perform the first experiments applying syntax-based SMT to simultaneous translation, and propose two methods to prevent degradations in accuracy: a method to predict unseen syntactic constituents that help generate complete parse trees, and a method that waits for more input when the current utterance is not enough to generate a fluent translation. Experiments on English-Japanese translation show that the proposed methods allow for improvements in accuracy, particularly with regards to word order of the target sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speech translation is an application of machine translation (MT) that converts utterances from the speaker's language into the listener's language. One of the most identifying features of speech translation is the fact that it must be performed in real time while the speaker is speaking, and thus it is necessary to split a constant stream of words into translatable segments before starting the translation process. Traditionally, speech translation assumes that each segment corresponds to a sentence, and thus performs sentence boundary detection before translation (Matusov et al., 2006) . However, full sentences can be long, particularly in formal speech such as lectures, and if translation does not start until explicit ends of Figure 1 : Simultaneous translation where the source sentence is segmented after \"I think\" and translated according to (a) the standard method, (b) Grissom II et al. (2014) 's method of final verb prediction, and (c) our method of predicting syntactic constituents. sentences, listeners may be forced to wait a considerable time until receiving the result of translation. For example, when the speaker continues to talk for 10 seconds, listeners must wait at least 10 seconds to obtain the result of translation. This is the major factor limiting simultaneity in traditional speech translation systems.",
"cite_spans": [
{
"start": 570,
"end": 592,
"text": "(Matusov et al., 2006)",
"ref_id": "BIBREF14"
},
{
"start": 885,
"end": 909,
"text": "Grissom II et al. (2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 737,
"end": 745,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Simultaneous translation (Section 2) avoids this problem by starting to translate before observing the whole sentence, as shown in Figure 1 (a) . However, as translation starts before the whole sentence is observed, translation units are often not syntactically or semantically complete, and the performance may suffer accordingly. The deleterious effect of this missing information is less worrying in largely monotonic language pairs (e.g. English-French), but cannot be discounted in syntactically distant language pairs (e.g. English-Japanese) that often require long-distance reordering beyond translation units.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 143,
"text": "Figure 1 (a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One way to avoid this problem of missing information is to explicitly predict information needed to translate the content accurately. An ambitious first step in this direction was recently proposed by Grissom II et al. (2014) , who describe a method that predicts sentence-final verbs using reinforcement learning (e.g. Figure 1 (b)). This approach has the potential to greatly decrease the delay in translation from verb-final languages to verbinitial languages (such as German-English), but is also limited to only this particular case.",
"cite_spans": [
{
"start": 201,
"end": 225,
"text": "Grissom II et al. (2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 320,
"end": 328,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a more general method that focuses on a different variety of information: unseen syntactic constituents. This method is motivated by our desire to apply translation models that use source-side parsing, such as tree-to-string (T2S) translation (Huang et al., 2006) or syntactic pre-ordering (Xia and McCord, 2004) , which have been shown to greatly improve translation accuracy over syntactically divergent language pairs. However, conventional methods for parsing are not directly applicable to the partial sentences that arise in simultaneous MT. The reason for this, as explained in detail in Section 3, is that parsing methods generally assume that they are given input that forms a complete syntactic phrase. Looking at the example in Figure 1 , after the speaker has spoken the words \"I think\" we have a partial sentence that will only be complete once we observe the following SBAR. Our method attempts to predict exactly this information, as shown in Figure 1 (c), guessing the remaining syntactic constituents that will allow us to acquire a proper parse tree.",
"cite_spans": [
{
"start": 269,
"end": 289,
"text": "(Huang et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 316,
"end": 338,
"text": "(Xia and McCord, 2004)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 765,
"end": 773,
"text": "Figure 1",
"ref_id": null
},
{
"start": 984,
"end": 992,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Specifically the method consists of two parts: First, we propose a method that trains a statistical model to predict future syntactic constituents based on features of the input segment (Section 4). Second, we demonstrate how to apply this syntac-tic prediction to MT, including the proposal of a heuristic method that examines whether a future constituent has the potential to cause a reordering problem during translation, and wait for more input in these cases (Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on the proposed method, we perform experiments in simultaneous translation of English-Japanese talks (Section 6). As this is the first work applying T2S translation to simultaneous MT, we first compare T2S to more traditional phrase-based techniques. We find that T2S translation is effective with longer segments, but drops off quickly with shorter segments, justifying the need for techniques to handle translation when full context is not available. We then compare the proposed method of predicting syntactic constituents, and find that it improves translation results, particularly with respect to word ordering in the output sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In simultaneous translation, we assume that we are given an incoming stream of words f , which we are expected to translate. As the f is long, we would like to begin translating before we reach the end of the stream. Previous methods to do so can generally be categorized into incremental decoding methods, and sentence segmentation methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "2"
},
{
"text": "In incremental decoding, each incoming word is fed into the decoder one-by-one, and the decoder updates the search graph with the new words and decides whether it should begin translation. Incremental decoding methods have been proposed for phrase-based (Sankaran et al., 2010; Yarmohammadi et al., 2013; Finch et al., 2014) and hierarchical phrase-based (Siahbani et al., 2014) SMT models. 1 Incremental decoding has the advantage of using information about the decoding graph in the choice of translation timing, but also requires significant changes to the internal workings of the decoder, precluding the use of standard decoding tools or techniques.",
"cite_spans": [
{
"start": 254,
"end": 277,
"text": "(Sankaran et al., 2010;",
"ref_id": "BIBREF24"
},
{
"start": 278,
"end": 304,
"text": "Yarmohammadi et al., 2013;",
"ref_id": "BIBREF27"
},
{
"start": 305,
"end": 324,
"text": "Finch et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 355,
"end": 378,
"text": "(Siahbani et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "2"
},
{
"text": "Sentence segmentation methods ( Figure 2 ) provide a simpler alternative by first dividing f into subsequences of 1 or more words [f (1) , . . . , f (N ) ]. These segments are then translated with a traditional decoder into output sequences [e (1) , . . . , e (N ) ], which each are output as soon as translation finishes. Many methods have been proposed to perform segmentation, including the use of prosodic boundaries (F\u00fcgen et al., 2007; Bangalore et al., 2012) , predicting punctuation marks , reordering probabilities of phrases (Fujita et al., 2013) , or models to explicitly optimize translation accuracy (Oda et al., 2014) . Previous work often assumes that f is a single sentence, and focus on sub-sentential segmentation, an approach we follow in this work.",
"cite_spans": [
{
"start": 133,
"end": 136,
"text": "(1)",
"ref_id": null
},
{
"start": 149,
"end": 153,
"text": "(N )",
"ref_id": null
},
{
"start": 260,
"end": 264,
"text": "(N )",
"ref_id": null
},
{
"start": 421,
"end": 441,
"text": "(F\u00fcgen et al., 2007;",
"ref_id": "BIBREF4"
},
{
"start": 442,
"end": 465,
"text": "Bangalore et al., 2012)",
"ref_id": "BIBREF0"
},
{
"start": 535,
"end": 556,
"text": "(Fujita et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 613,
"end": 631,
"text": "(Oda et al., 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "2"
},
{
"text": "Sentence segmentation methods have the obvious advantage of allowing for translation as soon as a segment is decided. However, the use of the shorter segments also makes it necessary to translate while part of the utterance is still unknown. As a result, segmenting sentences more aggressively often results in a decrease translation accuracy. This is a problem in phrase-based MT, the framework used in the majority of previous research on simultaneous translation. However, it is an even larger problem when performing translation that relies on parsing the input sentence. We describe the problems caused by parsing a segment f (n) , and solutions, in the following section.",
"cite_spans": [
{
"start": 631,
"end": 634,
"text": "(n)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "2"
},
{
"text": "In standard phrase structure parsing, the parser assumes that each input string is a complete sentence, or at least a complete phrase. For example, Figure 3 (a) shows the phrase structure of the complete sentence \"this is a pen.\" However, in the case of simultaneous translation, each translation unit is not necessarily segmented in a way that guarantees that the translation unit is a complete sentence, so each translation unit should be treated not as a whole, but as a part of a spoken sentence. As a result, the parser input may be an incomplete sequence of words (e.g. \"this is,\" \"is a\"), and a standard parser will generate an incorrect parse as shown in Figures 3(b) and 3(c).",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Difficulties in Incomplete Parsing",
"sec_num": "3.1"
},
{
"text": "The proposed method solves this problem by supplementing unseen syntactic constituents before and after the translation unit. For example, considering parse trees for the complete sentence in Figure 3 (a), we see that a noun phrase (NP) can be placed after the translation unit \"this is.\" If we append the syntactic constituent NP as a \"black box\" before parsing, we can create a syntactically desirable parse tree as shown in Figure 3 (d1) We also can construct another tree as shown in Figure 3(d2) by appending two constituents DT and NN . For the other example \"is a,\" we can create the parse tree in Figure 3 (e1) by appending NP before the unit and NN after the unit, or can create the tree in Figure 3 (e2) by appending only NN after the unit.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 427,
"end": 435,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 488,
"end": 494,
"text": "Figure",
"ref_id": null
},
{
"start": 605,
"end": 613,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 700,
"end": 708,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Difficulties in Incomplete Parsing",
"sec_num": "3.1"
},
{
"text": "A typical model for phrase structure parsing is the probabilistic context-free grammar (PCFG). Parsing is performed by finding the parse tree T that maximizes the PCFG probability given a sequence of words w \u2261 [w 1 , w 2 , \u2022 \u2022 \u2022 , w n ] as shown by Eq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "(2):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T * \u2261 arg max T Pr(T |w) (1) \u2243 arg max T [ \u2211 (X\u2192[Y,\u2022\u2022\u2022])\u2208T log Pr(X \u2192 [Y, \u2022 \u2022 \u2022]) + \u2211 (X\u2192w i )\u2208T log Pr(X \u2192 w i ) ],",
"eq_num": "(2)"
}
],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "where Pr(X \u2192 [Y, \u2022 \u2022 \u2022]) represents the generative probability of the sequence of constituents [Y, \u2022 \u2022 \u2022] given a parent constituent X, and Pr(X \u2192 w_i) represents the generative probability of each word w_i (1 \u2264 i \u2264 n) given a parent constituent X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "To consider parsing of incomplete sentences with appended syntactic constituents, we define L \u2261 [L_{|L|}, \u2022 \u2022 \u2022 , L_2, L_1] as the sequence of unseen syntactic constituents preceding w, and R as the sequence following it. We assume that both sequences of syntactic constituents L and R are predicted from the sequence of words w before the main parsing step. Thus, the whole process of parsing incomplete sentences can be described as the combination of predicting both sequences of syntactic constituents, represented by Eqs. (3) and (4), and parsing with the predicted syntactic constituents, represented by Eq. (5):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L * \u2261 arg max L Pr(L|w),",
"eq_num": "(3)"
}
],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R * \u2261 arg max R Pr(R|w),",
"eq_num": "(4)"
}
],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "T * \u2261 arg max T Pr(T |L * , w, R * ). (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "Algorithmically, parsing with predicted syntactic constituents can be achieved by simply treating each syntactic constituent as another word in the input sequence and using a standard parsing algorithm such as the CKY algorithm. In this process, the only difference between syntactic constituents and normal words is the probability, which we define as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr(X \u2192 Y ) \u2261 { 1, if Y = X 0, otherwise.",
"eq_num": "(6)"
}
],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "It should be noted that here L refers to syntactic constituents that have already been seen in the past. Thus, it is theoretically possible to store past parse trees as history and generate L based on this history, or condition Eq. 3 based on this information. However, deciding which part of trees to use as L is not trivial, and applying this approach requires that we predict L and R using different methods. Thus, in this study, we use the same method to predict both sequences of constituents for simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "In the next section, we describe the actual method used to create a predictive model for these strings of syntactic constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formulation of Incomplete Parsing",
"sec_num": "3.2"
},
{
"text": "In order to define which syntactic constituents should be predicted by our model, we assume that each final parse tree generated by w, L and R must satisfy the following conditions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "4"
},
{
"text": "1. The parse tree generated by w, L and R must be \"complete.\" Defining this formally, this means that the root node of the parse tree for the segment must correspond to a node in the parse tree for the original complete sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "4"
},
{
"text": "2. Each parse tree contains only L, w and R as terminal symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "4"
},
{
"text": "3. The number of nodes is the minimum necessary to satisfy these conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "4"
},
{
"text": "As shown in the Figure 3 , there is ambiguity regarding syntactic constituents to be predicted (e.g. we can choose either [ NP ] or [ DT , NN ] as R for w = [ \"this\", \"is\" ]). These conditions avoid ambiguity of which syntactic constituents should predicted for partial sentences in the training data. Looking at the example, Figures 3(d1) and 3(e1) satisfy these conditions, but 3(d2) and 3(e2) do not. Figure 4 shows the statistics of the lengths of L and R sequences extracted according to these criteria for all substrings of the WSJ datasets 2 to 23 of the Penn Treebank (Marcus et al., 1993), a standard training set for English syntactic parsers. From the figure we can see that lengths of up to 2 constituents cover the majority of cases for both L and R, but a significant number of cases require longer strings. Thus methods that predict a fixed number of constituents are not appropriate here. In Algorithm 1, we show the method we propose to Figure 4 : Statistics of numbers of syntactic constituents to be predicted.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 326,
"end": 339,
"text": "Figures 3(d1)",
"ref_id": "FIGREF1"
},
{
"start": 404,
"end": 412,
"text": "Figure 4",
"ref_id": null
},
{
"start": 954,
"end": 962,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "4"
},
{
"text": "predict R for constituent sequences of an arbitrary length. Here + + represents the concatenation of two sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "4"
},
{
"text": "First, our method forcibly parses the input sequence w and retrieves a potentially incorrect parse tree T \u2032 , which is used to calculate features for the prediction model. The next syntactic constituent R + is then predicted using features extracted from w, T \u2032 , and the predicted sequence history R * . This prediction is repeated recurrently until the end-of-sentence symbol (\"nil\" in Algorithm 1) is predicted as the next symbol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "4"
},
{
"text": "In this study, we use a multi-label classifier based on linear SVMs (Fan et al., 2008) to predict new syntactic constituents with features shown in Table 1 . We treat the input sequence w and predicted syntactic constituents R * as a concatenated sequence w + + R * . For example, if we have w = [ this, is, a ] and R * = [ NN ], then the word features \"3 rightmost 1-grams\" will take the values \"is,\" \"a,\" and NN . Tags of semi-terminal nodes in T \u2032 are used as part-of-speech (POS) tags for corresponding words and the POS of each predicted syntactic constituent is simply its tag. \"nil\" is used when some information is not available. For example, if we have w = [ this, is ] and R * = [ ] then \"3 rightmost 1-grams\" will take the values \"nil,\" \"this,\" and \"is.\" Algorithm 1 and Table 1 shows the method used to predict R * but L * can be predicted by performing the prediction process in the reverse order.",
"cite_spans": [
{
"start": 68,
"end": 86,
"text": "(Fan et al., 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "4"
},
{
"text": "Once we have created a tree from the sequence L * + + w + + R * by performing PCFG parsing with predicted syntactic constituents according to Eqs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string SMT with Syntactic Constituents",
"sec_num": "5"
},
{
"text": "(2), (5), and (6), the next step is to use this tree in translation. In this section, we focus specifically Algorithm 1 Prediction algorithm for following constituents It should be noted that using these trees in T2S translation models is not trivial because each estimated syntactic constituent should be treated as an aggregated entity representing all possibilities of subtrees rooted in such a constituent. Specifically, there are two problems: the possibility of reordering an as-of-yet unseen syntactic constituent into the middle of the translated sentence, and the calculation of language model probabilities considering syntactic constituent tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string SMT with Syntactic Constituents",
"sec_num": "5"
},
{
"text": "R * T \u2032 \u2190 arg max T Pr(T |w) R * \u2190 [ ] loop R + \u2190 arg max R Pr(R|T \u2032 , R * ) if R + = nil then return R * end if R * \u2190 R * + +[R + ] end loop",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string SMT with Syntactic Constituents",
"sec_num": "5"
},
{
"text": "With regards to the first problem of reordering, consider the example of English-Japanese translation in Figure 5(b) , where a syntactic constituent PP is placed at the end of the English sequence (R * ), but the corresponding entity in the Japanese translation result should be placed in the middle of the sentence. In this case, if we attempt to translate immediately, we will have to omit the as-of-yet unknown PP from our translation and translate it later, resulting in an unnatural word ordering in the Thus, if any of the syntactic constituents in R are placed anywhere other than the end of the translation result, we can assume that this is a hint that the current segmentation boundary is not appropriate. Based on this intuition, we propose a heuristic method that ignores segmentation boundaries that result in a translation of this type, and instead wait for the next translation unit, helping to avoid problems due to inappropriate segmentation boundaries. Algorithm 2 formally describes this waiting method.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 116,
"text": "Figure 5(b)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Tree-to-string SMT with Syntactic Constituents",
"sec_num": "5"
},
{
"text": "The second problem of language model probabilities arises because we are attempting to generate a string of words, some of which are not actual words but tags representing syntactic constituents. Creating a language model that contains probabilities for these tags in the appropriate places is not trivial, so for simplicity, we simply assume that every syntactic constituent tag is an unknown word, and that the output of translation consists of both translated normal words and non-translated tags as shown in Figure 5 . We relegate a more complete handling of these tags to future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 512,
"end": 520,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Tree-to-string SMT with Syntactic Constituents",
"sec_num": "5"
},
{
"text": "Algorithm 2 Waiting algorithm for T2S SMT We perform 2 types of experiments to evaluate the effectiveness of the proposed methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string SMT with Syntactic Constituents",
"sec_num": "5"
},
{
"text": "w \u2190 [ ] loop w \u2190 w + + NextSegment() L * \u2190 arg max L Pr(L|w) R * \u2190 arg max R Pr(R|w) T * \u2190 arg max T Pr(T |L * , w, R * ) e * \u2190",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string SMT with Syntactic Constituents",
"sec_num": "5"
},
{
"text": "In the first experiment, we evaluate prediction accuracies of unseen syntactic constituents L and R. To do so, we train a predictive model as described in Section 4 using an English treebank and evaluate its performance. To create training and testing data, we extract all substrings w s.t. |w| \u2265 2 in the Penn Treebank and calculate the corresponding syntactic constituents L and R by according to the original trees and substring w. We use the 90% of the extracted data for training a classifier and the remaining 10% for testing estimation recall, precision and F-measure. We use the Ckylark parser (Oda et al., 2015) to generate T \u2032 from w.",
"cite_spans": [
{
"start": 602,
"end": 620,
"text": "(Oda et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "6.1.1"
},
{
"text": "Next, we evaluate the performance of T2S simultaneous translation adopting the two proposed methods. We use data of TED talks from the English-Japanese section of WIT3 (Cettolo et al., 2012) , and also append dictionary entries and examples in Eijiro 3 to the training data to increase the vocabulary of the translation model. The total number of sentences/entries is 2.49M (WIT3, Eijiro), 998 (WIT3), and 468 (WIT3) sentences for training, development, and testing respectively.",
"cite_spans": [
{
"start": 168,
"end": 190,
"text": "(Cettolo et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "We use the Stanford Tokenizer 4 for English tokenization, KyTea (Neubig et al., 2011) for Japanese tokenization, GIZA++ (Och and Ney, 2003) to construct word alignment, and KenLM (Heafield et al., 2013) to generate a 5-gram target language model. We use the Ckylark parser, which we modified to implement the parsing method of Section 3.2, to generate T * from L * , w and R * .",
"cite_spans": [
{
"start": 64,
"end": 85,
"text": "(Neubig et al., 2011)",
"ref_id": "BIBREF15"
},
{
"start": 120,
"end": 139,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF17"
},
{
"start": 179,
"end": 202,
"text": "(Heafield et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "We use Travatar (Neubig, 2013) to train the T2S translation model used in the proposed method, and also Moses (Koehn et al., 2007) to train phrase-based translation models that serve as a baseline. Each translation model is tuned using MERT (Och, 2003) to maximize BLEU (Papineni et al., 2002) . We evaluate translation accuracies by BLEU and also RIBES (Isozaki et al., 2010) , a reordering-focused metric which has achieved high correlation with human evaluation on English-Japanese translation tasks.",
"cite_spans": [
{
"start": 16,
"end": 30,
"text": "(Neubig, 2013)",
"ref_id": "BIBREF16"
},
{
"start": 110,
"end": 130,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF11"
},
{
"start": 241,
"end": 252,
"text": "(Och, 2003)",
"ref_id": "BIBREF18"
},
{
"start": 270,
"end": 293,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF21"
},
{
"start": 354,
"end": 376,
"text": "(Isozaki et al., 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "We perform tests using two different sentence segmentation methods. The first is n-words segmentation, a simple heuristic that segments the input every n words. This method disregards syntactic and semantic units in the original sentence, allowing us to evaluate the robustness of translation against poor segmentation boundaries. The second method is the state-of-the-art segmentation strategy proposed by Oda et al. (2014) , which finds segmentation boundaries that optimize the accuracy of the translation output. We use BLEU+1 (Lin and Och, 2004) as the objective of this segmentation strategy.",
"cite_spans": [
{
"start": 415,
"end": 432,
"text": "Oda et al. (2014)",
"ref_id": "BIBREF19"
},
{
"start": 539,
"end": 558,
"text": "(Lin and Och, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "We evaluate the following baseline and proposed methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "PBMT is a baseline using phrase-based SMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "T2S uses T2S SMT with parse trees generated from only w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "T2S-Tag further predicts unseen syntactic constituents according to Section 4. Before evaluation, all constituent tags are simply deleted from the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "T2S-Wait uses T2S-Tag and adds the waiting strategy described in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "We also show PBMT-Sent and T2S-Sent, which are full sentence-based PBMT and T2S systems. Table 2 shows the recall, precision, and F-measure of the estimated L and R sequences. The table shows results for two evaluation settings, depending on whether or not the order of the generated constituents is considered.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.1.2"
},
{
"text": "We can see that in each case recall is lower than the corresponding precision, and that the performance on L differs between the ordered and unordered settings. These trends result from the fact that the model generates fewer constituents than exist in the test data. However, this trend is not entirely unexpected, because it is not possible to guess syntactic constituents with complete accuracy from every substring w. For example, parts of the sentence \"in the next 18 minutes\" can generate the sequences \"in the next CD NN \" and \" IN DT JJ 18 minutes,\" but the constituent CD in the former case and the constituents DT and JJ in the latter case are not necessary in all situations. In contrast, NN and IN will probably be inserted in most cases. As a result, the appearance of such ambiguous constituents in the training data is less consistent than that of necessary syntactic constituents, and thus the prediction model avoids generating them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Syntactic Constituents",
"sec_num": "6.2.1"
},
{
"text": "Next, we evaluate the translation results achieved by the proposed method. Figures 6 and 7 show the relationship between the mean number of words in the translation segments and the translation accuracy in BLEU and RIBES respectively. The horizontal axis of each graph indicates the mean number of words in the translation units used to generate the actual translation output, which can be assumed to be proportional to the mean waiting time for listeners. In all cases except T2S-Wait, these values are equal to the mean length of the translation units generated by the segmentation strategies; for T2S-Wait, this value shows the length of the translation units concatenated by the waiting strategy. First looking at the full sentence results (rightmost points in each graph), we can see that T2S greatly outperforms PBMT on full sentences, underlining the importance of considering syntax for this language pair. Turning to simultaneous translation, we first consider the case of n-words segmentation, which demonstrates the robustness of each method to poorly formed translation segments. When we compare PBMT and T2S, we can see that T2S is superior for longer segments, but on shorter segments its performance is greatly reduced, dropping below that of PBMT in BLEU at an average of 6 words, and in RIBES at an average of 4 words. This trend is reasonable, considering that shorter translation units will result in syntactically inconsistent units and thus incorrect parse trees. Next looking at the results for T2S-Tag, we can see that in the case of n-words segmentation it is able to maintain the same translation performance as PBMT, even at the shorter settings. Furthermore, T2S-Wait matches the performance of T2S-Tag in BLEU and achieves much higher performance than any of the other methods in RIBES, particularly with regard to shorter translation units. This result shows that the method of waiting for more input in the face of potential re-ordering problems is highly effective in maintaining the correct ordering of the output.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 90,
"text": "Figures 6 and 7",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.2.2"
},
{
"text": "In the case of the optimized segmentation, all three T2S methods maintain approximately the same performance, consistently outperforming PBMT in RIBES and crossing it in BLEU at around 5-6 words. From this, we can hypothesize that the optimized segmentation strategy learns features that maintain some syntactic consistency, which plays a similar role to the proposed method. However, the RIBES scores for T2S-Wait are still generally higher than those of the other methods, demonstrating that waiting maintains its reordering advantage even in the optimized segmentation case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Translation",
"sec_num": "6.2.2"
},
{
"text": "In this paper, we proposed the first method to apply SMT using source syntax to simultaneous translation. In particular, we proposed methods to maintain the syntactic consistency of translation units by predicting unseen syntactic constituents, and by waiting until more input is available when this is necessary to achieve good translation results. Experiments on an English-Japanese TED talk translation task demonstrate that our methods are more robust to short, inconsistent translation segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "As future work, we are planning to devise more sophisticated methods for language modeling using constituent tags, and ways to incorporate previously translated segments into the estimation process for left-hand constituents. In addition, our method to predict additional constituents does not target grammatically complete translation units, for which L = [ ] and R = [ ], although there is still room for improvement in this assumption. Finally, we hope to expand the methods proposed here to a more incremental setting, where both parsing and decoding are performed incrementally, and the information from these processes can be reflected in the decision of segmentation boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "There is also one previous rule-based system that uses syntax in incremental translation, but it is language-specific and limited in domain (Ryu et al., 2006), and thus difficult to compare with our SMT-based system. It also does not predict unseen constituents, relying only on the observed segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is also potentially possible to create a predictive model for the actual content of the PP, as done for sentence-final verbs by Grissom II et al. (2014), but the space of potential prepositional phrases is huge, and we leave this non-trivial task for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://eijiro.jp/ 4 http://nlp.stanford.edu/software/tokenizer.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Part of this work was supported by JSPS KAK-ENHI Grant Number 24240032, and Grant-in-Aid for JSPS Fellows Grant Number 15J10649.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Real-time incremental speech-tospeech translation of dialogs",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kumar Rangarajan",
"suffix": ""
},
{
"first": "Prakash",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Ladan",
"middle": [],
"last": "Kolan",
"suffix": ""
},
{
"first": "Aura",
"middle": [],
"last": "Golipour",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jimenez",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore, Vivek Kumar Rangarajan Srid- har, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-to- speech translation of dialogs. In Proc. NAACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "WIT 3 : Web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. EAMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. WIT 3 : Web inventory of transcribed and translated talks. In Proc. EAMT.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "LIBLINEAR: A library for large linear classification",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Rong-En Fan",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "The Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An exploration of segmentation strategies in stream decoding",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Finch",
"suffix": ""
},
{
"first": "Xiaolin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Finch, Xiaolin Wang, and Eiichiro Sumita. 2014. An exploration of segmentation strategies in stream decoding. In Proc. IWSLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Simultaneous translation of lectures and speeches. Machine Translation",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "F\u00fcgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "Muntsin",
"middle": [],
"last": "Kolss",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian F\u00fcgen, Alex Waibel, and Muntsin Kolss. 2007. Simultaneous translation of lectures and speeches. Machine Translation, 21.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Simple, lexicalized choice of translation timing for simultaneous speech translation",
"authors": [
{
"first": "Tomoki",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Sakriani",
"middle": [],
"last": "Sakti",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoki Fujita, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2013. Sim- ple, lexicalized choice of translation timing for si- multaneous speech translation. In Proc. Interspeech.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dont until the final verb wait: Reinforcement learning for simultaneous machine translation",
"authors": [
{
"first": "Alvin",
"middle": [],
"last": "Grissom",
"suffix": ""
},
{
"first": "I",
"middle": [
"I"
],
"last": "",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum\u00e9 III. 2014. Dont until the final verb wait: Reinforcement learning for simulta- neous machine translation. In Proc. EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Scalable modified Kneser-Ney language model estimation",
"authors": [
{
"first": "Clark",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proc. ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical syntax-directed translation with extended domain of locality",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. AMTA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic evaluation of translation quality for distant language pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proc. EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ORANGE: a method for evaluating automatic evaluation metrics for machine translation",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: a method for evaluating automatic evaluation met- rics for machine translation. In Proc. COLING.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Building a large annotated corpus of english: The Penn treebank",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large anno- tated corpus of english: The Penn treebank. Com- putational linguistics, 19(2).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic sentence segmentation and punctuation prediction for spoken language translation",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "Mauser",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny Matusov, Arne Mauser, and Hermann Ney. 2006. Automatic sentence segmentation and punc- tuation prediction for spoken language translation. In Proc. IWSLT.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pointwise prediction for robust, adaptable japanese morphological analysis",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yosuke",
"middle": [],
"last": "Nakata",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable japanese morphological analysis. In Proc. ACL- HLT.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Travatar: A forest-to-string machine translation engine based on tree transducers",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig. 2013. Travatar: A forest-to-string machine translation engine based on tree transduc- ers. In Proc. ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignment models. Computational linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Optimizing segmentation strategies for simultaneous speech translation",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Sakriani",
"middle": [],
"last": "Sakti",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Optimiz- ing segmentation strategies for simultaneous speech translation. In Proc. ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Ckylark: A more robust PCFG-LA parser",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Sakriani",
"middle": [],
"last": "Sakti",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Ckylark: A more robust PCFG-LA parser. In Proc. NAACL- HLT.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic eval- uation of machine translation. In Proc. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Segmentation strategies for streaming speech translation",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Kumar Rangarajan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bangalore",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathinavelu Chengal- varayan. 2013. Segmentation strategies for stream- ing speech translation. In Proc. NAACL-HLT.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Simultaneous english-japanese spoken language translation based on incremental dependency parsing and transfer",
"authors": [
{
"first": "Koichiro",
"middle": [],
"last": "Ryu",
"suffix": ""
},
{
"first": "Shigeki",
"middle": [],
"last": "Matsubara",
"suffix": ""
},
{
"first": "Yasuyoshi",
"middle": [],
"last": "Inagaki",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koichiro Ryu, Shigeki Matsubara, and Yasuyoshi In- agaki. 2006. Simultaneous english-japanese spo- ken language translation based on incremental de- pendency parsing and transfer. In Proc. COLING.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Incremental decoding for phrase-based statistical machine translation",
"authors": [
{
"first": "Ajeet",
"middle": [],
"last": "Baskaran Sankaran",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Grewal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baskaran Sankaran, Ajeet Grewal, and Anoop Sarkar. 2010. Incremental decoding for phrase-based statis- tical machine translation. In Proc. WMT.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Incremental translation using hierarchical phrasebased translation system",
"authors": [
{
"first": "Maryam",
"middle": [],
"last": "Siahbani",
"suffix": ""
},
{
"first": "Ramtin",
"middle": [
"Mehdizadeh"
],
"last": "Seraj",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. SLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maryam Siahbani, Ramtin Mehdizadeh Seraj, Baskaran Sankaran, and Anoop Sarkar. 2014. Incremental translation using hierarchical phrase- based translation system. In Proc. SLT.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving a statistical MT system with automatically learned rewrite patterns",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Mccord",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia and Michael McCord. 2004. Improving a statistical MT system with automatically learned rewrite patterns. In Proc. COLING.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Incremental segmentation and decoding strategies for simultaneous translation",
"authors": [
{
"first": "Mahsa",
"middle": [],
"last": "Yarmohammadi",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kumar Rangarajan",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sankaran",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahsa Yarmohammadi, Vivek Kumar Rangara- jan Sridhar, Srinivas Bangalore, and Baskaran Sankaran. 2013. Incremental segmentation and decoding strategies for simultaneous translation. In Proc. IJCNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Process of English-Japanese simultaneous translation with sentence segmentation.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Phrase structures with surrounding syntactic constituents.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "as the sequence of preceding syntactic constituents of the translation unit and R \u2261 [R 1 , R 2 , \u2022 \u2022 \u2022 , R |R| ] as the sequence of following syntactic constituents of the translation unit. For the example Figure 3(d1), we assume that L = [ ] and R = [ NP ].",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Waiting for the next translation unit. target sentence. 2",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "Mean #words and BLEU scores of each method. (a) n-words segmentation (b) optimized segmentation",
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"text": "Mean #words and RIBES scores of each method.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Features used in predicting syntactic constituents.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">Type Feature</td></tr><tr><td colspan=\"2\">Words 3 leftmost 1,2-grams in w + + R *</td></tr><tr><td/><td>3 rightmost 1,2-grams in w + + R *</td></tr><tr><td/><td>Left/rightmost pair in w + + R *</td></tr><tr><td>POS</td><td>Same as \"Words\"</td></tr><tr><td colspan=\"2\">Parse Tag of the root node</td></tr><tr><td/><td>Tags of children of the root node</td></tr><tr><td/><td>Pairs of root and children nodes</td></tr><tr><td colspan=\"2\">Length |w|</td></tr><tr><td/><td>|R * |</td></tr><tr><td colspan=\"2\">on T2S translation, which we use in our experi-</td></tr><tr><td colspan=\"2\">ments, but it is likely that similar methods are ap-</td></tr><tr><td colspan=\"2\">plicable to other uses of source-side syntax such</td></tr><tr><td colspan=\"2\">as pre-ordering as well.</td></tr></table>"
},
"TABREF2": {
"text": "Performance of syntactic constituent prediction.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>Target</td><td>P %</td><td>R %</td><td>F %</td></tr><tr><td>L</td><td>(ordered)</td><td>31.93</td><td colspan=\"2\">7.27 11.85</td></tr><tr><td/><td colspan=\"4\">(unordered) 51.21 11.66 19.00</td></tr><tr><td>R</td><td>(ordered)</td><td colspan=\"3\">51.12 33.78 40.68</td></tr><tr><td/><td colspan=\"4\">(unordered) 52.77 34.87 42.00</td></tr></table>"
}
}
}
}