{
"paper_id": "E14-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:39:13.025225Z"
},
"title": "Word Ordering with Phrase-Based Grammars",
"authors": [
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Tomalin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe an approach to word ordering using modelling techniques from statistical machine translation. The system incorporates a phrase-based model of string generation that aims to take unordered bags of words and produce fluent, grammatical sentences. We describe the generation grammars and introduce parsing procedures that address the computational complexity of generation under permutation of phrases. Against the best previous results reported on this task, obtained using syntax-driven models, we report substantial quality improvements, with BLEU score gains of over 20 points, which we confirm with human fluency judgements. Our system incorporates dependency language models, large n-gram language models, and minimum Bayes risk decoding.",
"pdf_parse": {
"paper_id": "E14-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe an approach to word ordering using modelling techniques from statistical machine translation. The system incorporates a phrase-based model of string generation that aims to take unordered bags of words and produce fluent, grammatical sentences. We describe the generation grammars and introduce parsing procedures that address the computational complexity of generation under permutation of phrases. Against the best previous results reported on this task, obtained using syntax-driven models, we report substantial quality improvements, with BLEU score gains of over 20 points, which we confirm with human fluency judgements. Our system incorporates dependency language models, large n-gram language models, and minimum Bayes risk decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word ordering is a fundamental problem in NLP and has been shown to be NP-complete in discourse ordering (Althaus et al., 2004) and in SMT with arbitrary word reordering (Knight, 1999) . Typical solutions involve constraints on the space of permutations, as in multi-document summarisation (Barzilay and Elhadad, 2011) and preordering in SMT (Tromble and Eisner, 2009; Genzel, 2010) .",
"cite_spans": [
{
"start": 105,
"end": 127,
"text": "(Althaus et al., 2004)",
"ref_id": "BIBREF1"
},
{
"start": 170,
"end": 184,
"text": "(Knight, 1999)",
"ref_id": "BIBREF20"
},
{
"start": 290,
"end": 318,
"text": "(Barzilay and Elhadad, 2011)",
"ref_id": "BIBREF3"
},
{
"start": 342,
"end": 368,
"text": "(Tromble and Eisner, 2009;",
"ref_id": "BIBREF32"
},
{
"start": 369,
"end": 382,
"text": "Genzel, 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some recent work attempts to address the fundamental word ordering task directly, using syntactic models and heuristic search. Wan et al. (2009) use a dependency grammar to address word ordering, while Zhang and Clark (2011) use CCG and large-scale n-gram language models.",
"cite_spans": [
{
"start": 127,
"end": 144,
"text": "Wan et al. (2009)",
"ref_id": "BIBREF34"
},
{
"start": 202,
"end": 224,
"text": "Zhang and Clark (2011)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These techniques are applied to the unconstrained problem of generating a sentence from a multi-set of input words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe GYRO (Get Your Order Right), a phrase-based approach to word ordering. Given a bag of words, the system first scans a large, trusted text collection and extracts phrases consisting of words from the bag. Strings are then generated by concatenating these phrases in any order, subject to the constraint that every string is a valid reordering of the words in the bag, and the results are scored under an n-gram language model (LM). The motivation is that it is easier to make fluent sentences from phrases (snippets of fluent text) than from words in isolation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "GYRO builds on approaches developed for syntactic SMT (Chiang, 2007; de Gispert et al., 2010; Iglesias et al., 2011) . The system generates strings in the form of weighted automata which can be rescored using higher-order n-gram LMs, dependency LMs (Shen et al., 2010) , and Minimum Bayes Risk decoding, either using posterior probabilities obtained from GYRO or SMT systems.",
"cite_spans": [
{
"start": 54,
"end": 68,
"text": "(Chiang, 2007;",
"ref_id": "BIBREF10"
},
{
"start": 69,
"end": 93,
"text": "de Gispert et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 94,
"end": 116,
"text": "Iglesias et al., 2011)",
"ref_id": "BIBREF19"
},
{
"start": 249,
"end": 268,
"text": "(Shen et al., 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We report extensive experiments using BLEU and conclude with human assessments. We show that despite its relatively simple formulation, GYRO gives BLEU scores over 20 points higher than the best previously reported results, generated by a syntax-based ordering system. Human fluency assessments confirm these substantial improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We take as input a bag of N words \u2126 = {w_1, . . . , w_N}. The words are sorted, e.g. alphabetically, so that it is possible to refer to the i-th word in the bag, and repeated words are distinct tokens. We also take a set of phrases, L(\u2126), that are extracted from large text collections and contain only words from \u2126. We refer to phrases as u, i.e. u \u2208 L(\u2126). The goal is to generate all permutations of \u2126 that can be formed by concatenation of phrases from L(\u2126).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-based Word Ordering",
"sec_num": "2"
},
{
"text": "Consider a subset A \u2282 \u2126. We can represent A by an N-bit binary string I(A) = I_1(A) . . . I_N(A), where I_i(A) = 1 if w_i \u2208 A, and I_i(A) = 0 otherwise. A Context-Free Grammar (CFG) for generation can then be defined by the following rules: Phrase-based Rules: \u2200A \u2282 \u2126 and \u2200u \u2208 L(A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Order Generation Grammar",
"sec_num": "2.1"
},
{
"text": "I(A) \u2192 u. Concatenation Rules: \u2200A \u2282 \u2126, B \u2282 A, C \u2282 A such that I(A) = I(B) + I(C) and I(B) \u2022 I(C) = 0: I(A) \u2192 I(B) I(C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Order Generation Grammar",
"sec_num": "2.1"
},
{
"text": "where \u2022 is the bit-wise logical AND. Root:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Order Generation Grammar",
"sec_num": "2.1"
},
{
"text": "S \u2192 I(\u2126)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Order Generation Grammar",
"sec_num": "2.1"
},
{
"text": "We use this grammar to 'parse' the list of words in the bag \u2126. The grammar has one nonterminal per possible binary string, so potentially 2^N distinct nonterminals might be needed to generate the language. Each nonterminal can produce either a phrase u \u2208 L(A), or the concatenation of two binary strings that share no bits in common. A derivation is a sequence of rules that starts from the bit string I(\u2126). Rules are unweighted in this basic formulation. For example, assume the following bag \u2126 = {a, b, c, d, e}, which we sort alphabetically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Order Generation Grammar",
"sec_num": "2.1"
},
{
"text": "Assume the phrases are L(\u2126) = {\"a b\", \"b a\", \"d e c\"}. The generation grammar contains the following 6 rules: Figure 1 represents all the possible derivations in a hypergraph, which generate four alternative strings. For example, string \"d e c b a\" is obtained with derivation R6 R5 R3 R2, whereas string \"a b d e c\" is obtained via R6 R4 R1 R3.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Order Generation Grammar",
"sec_num": "2.1"
},
{
"text": "R1: 11000 \u2192 a b\nR2: 11000 \u2192 b a\nR3: 00111 \u2192 d e c\nR4: 11111 \u2192 11000 00111\nR5: 11111 \u2192 00111 11000\nR6: S \u2192 11111",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Order Generation Grammar",
"sec_num": "2.1"
},
{
"text": "We now describe a general algorithm for parsing a bag of words with phrase constraints. The search is organized along a two-dimensional grid M[x, y] of 2^N \u2212 1 cells, where each cell is associated with a unique nonterminal in the grammar (a bit string I with at least one bit set to 1). Each row x in the grid has",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing a Bag of Words",
"sec_num": "2.2"
},
{
"text": "C(N, x) cells, representing all the possible ways of covering exactly x words from the bag. There are N rows in total.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing a Bag of Words",
"sec_num": "2.2"
},
{
"text": "For a bit string I, X(I) is the weight of I, i.e. the number of 1's in I. In this way X(I(A)) points to the row associated with set A. There is no natural ordering of cells within a row, so we introduce a second function Y (I) which indicates which cell in row X(I) is associated with I. The basic parsing algorithm is given in Figure 2 . We first initialize the grid by filling the cells linked to phrase-based rules (lines 1-4 of Figure 2 ). Then parsing proceeds as follows. For each row in increasing order (line 5), and for each of the nonempty cells in the row (line 6), try to combine its bit string with any other bit strings (lines 7-8). If the combination is admitted, then form the resultant bit string and add the concatenation rule to the associated cell in the grid (lines 9-10). The combination will always yield a bit string that resides in a higher row of the grid, so the search is exhaustive. The number of cells will grow exponentially as the bag grows in size. In practice, the number of cells actually used in parsing can be smaller than 2^N \u2212 1. This depends strongly on the number of distinct phrase-based rules and the distinct subsets of \u2126 they cover. For example, if we consider 1-word subsets of \u2126, then all cells are needed and GYRO attempts all word permutations. However, if only 10 distinct 5-word phrases and 20 distinct 4-word phrases are considered for a bag of N=9 words, then fewer than 431 cells will be used (20 + 10 for the initial cells at rows 4 and 5; plus all combinations of 4-word subsets into row 8, which is less than 400; plus 1 for the last cell at row 9).",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 336,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 432,
"end": 440,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Parsing a Bag of Words",
"sec_num": "2.2"
},
{
"text": "PARSE-BAG-OF-WORDS\nInput: bag of words \u2126 of size N\nInput: list of phrases L(\u2126)\nInitialize - add phrase-based rules:\n1 M[x, y] \u2190 \u2205\n2 for each subset A \u2286 \u2126\n3 for each phrase u \u2208 L(A)\n4 add rule I(A) \u2192 u to cell M[X(I(A)), Y(I(A))]\nParse:\n5 for each row x = 1, . . . , N\n6 for each y = 1, . . . , C(N, x)\n7 for each valid A \u2286 \u2126\n8 if I_{x,y} \u2022 I(A) = 0, then\n9 I' \u2190 I_{x,y} + I(A)\n10 add rule I' \u2192 I_{x,y} I(A) to cell M[X(I'), Y(I')]\n11 if |M[N, 1]| > 0, success.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing a Bag of Words",
"sec_num": "2.2"
},
{
"text": "We are interested in producing the space of word sequences generated by the grammar, and in scoring each of the sequences according to a word-based n-gram LM. Assuming that parsing the bag of words succeeded, this is a scenario very similar to that of syntax-based approaches to SMT: the output is a large collection of word sequences, which are built by putting together smaller units and which can be found by a process of expansion, i.e. by traversing the back-pointers from an initial cell in a grid structure. A significant difference is that in syntax-based approaches the parsing stage tends to be computationally easier, since parsing has only a quadratic dependency on the length of the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation from Exact Parsing",
"sec_num": "2.3"
},
{
"text": "We borrow techniques from SMT to represent and manipulate the space of generation hypotheses. Here we follow the approach of expanding this space into a Finite-State Automaton (FSA) described in (de Gispert et al., 2010; Iglesias et al., 2011) . This means that in parsing, each cell M[x, y] is associated with an FSA F_{x,y}, which encodes all the sequences generated by the grammar when covering the words marked by the bit string of that cell. When a rule is added to a cell, a new path from the initial to the final state of F_{x,y} is created, so that each FSA is the union of all paths arising from the rules added to the cell. Importantly, when an instance of the concatenation rule is added to a cell, the new path is built with only two arcs. These point to other FSAs at lower rows in the grid, so that the result has the form of a Recursive Transition Network (RTN) with a finite depth of recursion. Following the example from Section 2.1, the top three FSAs in Figure 3 represent the RTN for the example from Figure 1 .",
"cite_spans": [
{
"start": 194,
"end": 219,
"text": "(de Gispert et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 220,
"end": 242,
"text": "Iglesias et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 962,
"end": 970,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 1006,
"end": 1014,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation from Exact Parsing",
"sec_num": "2.3"
},
{
"text": "The parsing algorithm is modified as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation from Exact Parsing",
"sec_num": "2.3"
},
{
"text": "4 add rule I(A) \u2192 u as a path to FSA F_{X(I(A)), Y(I(A))}\n...\n10 add rule I' \u2192 I_{x,y} I(A) as a path to FSA F_{X(I'), Y(I')}\n11 if NumStates(F_{N,1}) > 1, success.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation from Exact Parsing",
"sec_num": "2.3"
},
{
"text": "At this point we specify two strategies: Algorithm 1: Full expansion is described by the pseudocode in Figure 4 , excluding lines 2-3. A recursive FSA replacement operation (Allauzen et al., 2007) can be used to expand the FSA in the top-most cell. In our running example, the result is the FSA at the bottom of Figure 3 . We then apply a word-based LM to the resulting FSA via standard FSA composition. This outputs the complete (unpruned) language of interest, where each word sequence generated from the bag according to the phrasal constraints is scored by the LM. Algorithm 2: Pruned expansion is described by the pseudocode in Figure 4 , now including lines 2-3. We introduce pruning because full, unpruned expansion may not be feasible for large bags with many phrasal rules. Once parsing is done, we apply the following bottom-up pruning strategy. For each row starting at row r, we union all FSAs of the row and expand the unioned FSA through the recursive replacement operation. This yields the space of all generation hypotheses of that length. We then apply the language model to this lattice and reduce it under likelihood-based pruning at weight \u03b2. We then update each cell in the row with a new FSA obtained as the intersection of its original FSA and the pruned FSA. This intersection may yield an empty FSA for a particular cell (meaning that all its hypotheses were pruned out of the row), but it will always leave at least one surviving FSA per row, guaranteeing that if parsing succeeds, the top-most cell will expand into a non-empty FSA. As we process higher rows, the replacement operation will yield smaller FSAs because some back-pointers will point to empty FSAs. In this way memory usage can be controlled through parameters r and \u03b2. Of course, when pruning in this way, the final output lattice L will not contain the complete space of hypotheses that could be generated by the grammar.",
"cite_spans": [
{
"start": 173,
"end": 196,
"text": "(Allauzen et al., 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 103,
"end": 111,
"text": "Figure 4",
"ref_id": "FIGREF6"
},
{
"start": 312,
"end": 320,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 633,
"end": 641,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Generation from Exact Parsing",
"sec_num": "2.3"
},
{
"text": "The two generation algorithms presented above rely on a completed initial parsing step. However, given that the complexity of the parsing stage is O(2^N \u2022 K), this may not be achievable in practice. Leaving aside time considerations, the memory required to store 2^N FSAs will grow exponentially in N , even if the FSAs contain only pointers to other FSAs. Therefore we also describe an algorithm to perform bottom-up pruning guided by the LM during parsing. The pseudocode is identical to that of Algorithm 1 except for the following changes: in parsing (Figure 2 ) we pass G as input and we call the row pruning function of Figure 4 after line 5 if x \u2265 r.",
"cite_spans": [],
"ref_spans": [
{
"start": 712,
"end": 721,
"text": "(Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 783,
"end": 791,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Algorithm 3: Pruned Parsing and Generation",
"sec_num": "2.4"
},
{
"text": "FULL-PARSE-EXPANSION\nInput: bag of words \u2126 of size N\nInput: list of phrases L(\u2126)\nInput: word-based LM G\nOutput: word lattice L of generated sequences\nGenerate:\n1 PARSE-BAG-OF-WORDS(\u2126)\n2 for each row x = r, . . . , N \u2212 1\n3 PRUNE-ROW(x)\n4 F \u2190 FSA-REPLACE(F_{N,1})\n5 return L \u2190 F \u2022 G\n6 function PRUNE-ROW(x):\n7 F \u2190 \u22c3_y F_{x,y}\n8 F \u2190 FSA-REPLACE(F)\n9 F \u2190 F \u2022 G\n10 F \u2190 FSA-PRUNE(F, \u03b2)\n11 for each cell y = 1, . . . , C(N, x)\n12 F_{x,y} \u2190 F_{x,y} \u2022 F\n13 return",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Pruned Parsing and Generation",
"sec_num": "2.4"
},
{
"text": "We note that there is a strong connection between GYRO and the IDL approach of Soricut and Marcu (2005; 2006) . Our bag-of-words parser could be cast in the IDL formalism, and the FSA 'Replace' operation would be expressed by an IDL 'Unfold' operation. However, whereas their work applies pruning in the creation of the IDL expression prior to LM application, GYRO uses unweighted phrase constraints, so the LM must be considered for pruning while parsing.",
"cite_spans": [
{
"start": 79,
"end": 103,
"text": "Soricut and Marcu (2005;",
"ref_id": "BIBREF30"
},
{
"start": 104,
"end": 109,
"text": "2006)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Pruned Parsing and Generation",
"sec_num": "2.4"
},
{
"text": "We now report various experiments evaluating the performance of the generation approach described above. The system is evaluated using the MT08-nw and MT09-nw test sets. These correspond to the first English reference of the newswire portion of the Arabic-to-English NIST MT evaluation sets. They contain 813 and 586 sentences respectively (53,325 tokens in total; average sentence length = 38.1 tokens after tokenization). In order to reduce the computational complexity, all sentences with more than 20 tokens were divided into sub-sentences, with 20 tokens being the upper limit. Between 70% and 80% of the sentences in the test sets were divided in this way. For each of these sentences we create a bag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "The GYRO system uses an n-gram LM estimated over 1.3 billion words of English text, including the AFP and Xinhua portions of the GigaWord corpus version 4 (1.1 billion words) and the English side of various Arabic-English parallel corpora typically used in MT evaluations (0.2 billion words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "Phrases of up to length 5 are extracted for each bag from a text collection containing 10.6 billion words of English news text. We use efficient Hadoop-based look-up techniques to carry out this extraction step and to retrieve rules for generation (Pino et al., 2012) . The average number of phrases extracted as a function of the size of the bag is shown in Figure 5 . These are the phrase-based rules of our generation grammar.",
"cite_spans": [
{
"start": 248,
"end": 267,
"text": "(Pino et al., 2012)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 359,
"end": 367,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "We analyze here the computational requirements of the three alternative GYRO algorithms presented in Sections 2.3 and 2.4. We carry out this analysis on a subset of 200 random subsentences from MT08-nw and MT09-nw chosen to have the same sentence length distribution as the whole data set. For a fixed generation grammar comprised of 3-gram, 4-gram and 5-gram rules only, we run each algorithm with a memory limitation of 20GB. If the process reaches this limit, then it is killed. Figure 6 reports the worst-case memory required by each algorithm as a function of the size of the bag.",
"cite_spans": [],
"ref_spans": [
{
"start": 482,
"end": 490,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computational Analysis",
"sec_num": "3.1"
},
{
"text": "As shown, Full Expansion (Algorithm 1) is only feasible for bags that contain at most 12 words. By contrast, Pruned Expansion (Algorithm 2) with \u03b2 = 10 is feasible for bags of up to 18 words. (Figure 6 : Worst-case memory required (GB) by each GYRO algorithm relative to the size of the bags.) For",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computational Analysis",
"sec_num": "3.1"
},
{
"text": "bigger bags, the requirements of unpruned parsing make generation intractable under the memory limit. Finally, Pruned Parsing and Generation (Algorithm 3) is feasible at all bag sizes (up to 20 words), and its memory requirements can be controlled via the beam-width pruning parameter \u03b2. Harsher pruning (i.e. lower \u03b2) will incur more coverage problems, so it is desirable to use the highest feasible value of \u03b2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Analysis",
"sec_num": "3.1"
},
{
"text": "We emphasise that Algorithm 3, with suitable pruning strategies, can scale up to larger problems quite readily and generate output from much larger input sets than reported here. We focus here on generation quality for moderate sized problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Analysis",
"sec_num": "3.1"
},
{
"text": "We now compare the GYRO system with the Combinatory Categorial Grammar (CCG)-based system described in (Zhang et al., 2012) . By means of extracted CCG rules, the CCG system searches for an optimal parse guided by large-margin training. Each partial hypothesis (or 'edge') is scored using the syntax model and a 4-gram LM trained similarly on one billion words of English Gigaword data. Both systems are evaluated using BLEU (Papineni et al., 2002; Espinosa et al., 2010) .",
"cite_spans": [
{
"start": 103,
"end": 123,
"text": "(Zhang et al., 2012)",
"ref_id": "BIBREF38"
},
{
"start": 424,
"end": 447,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF25"
},
{
"start": 448,
"end": 470,
"text": "Espinosa et al., 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Performance",
"sec_num": "3.2"
},
{
"text": "For GYRO, we use the pruned parsing algorithm of Section 2.4 with r = 6 and \u03b2 = 10 and a memory usage limit of 20GB. The phrase-based rules of the grammar contain only 3-grams, 4-grams and 5-grams. Under these conditions, GYRO finds an output for 91.4% of the bags. For the remainder, we obtain an output either by pruning less or by adding bigram rules (in 7.2% of the bags), or simply by adding all words as unigram rules (1.4% of the bags). Table 1 gives the results obtained by CCG and GYRO under a 3-gram or a 4-gram LM. Because GYRO outputs word lattices as opposed to a 1-best hypothesis, we can reapply the same LM to the concatenated lattices of any sentences longer than 20 tokens to take into account context at sub-sentence boundaries. This is the result in the third row in the Table, labeled 'GYRO +3g'. We can see that GYRO benefits significantly from this rescoring, beating the CCG system across both sets. This is possibly explained by the CCG system's dependence upon in-domain data that have been explicitly marked-up using the CCG formalism. The final row reports the positive impact of increasing the LM order to 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 444,
"end": 451,
"text": "Table 1",
"ref_id": null
},
{
"start": 782,
"end": 788,
"text": "Table,",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation Performance",
"sec_num": "3.2"
},
{
"text": "Impact of generation grammar. To measure the benefits of using high-order n-grams as constraints for generation, we also ran GYRO with unigram rules only. This effectively performs permutation under the LM with the pruning mechanisms described. The BLEU scores are 54.0 and 54.5 for MT08-nw and MT09-nw respectively. This indicates that a strong GYRO grammar is very much needed for this type of parsing and generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Performance",
"sec_num": "3.2"
},
{
"text": "Quality of generated lattices. We assess the quality of the lattices output by GYRO under the 4-gram LM by computing the oracle BLEU score of either the 100-best lists or the whole lattices in the last two rows of Table 1 . In order to compute the latter, we use the linear approximation to BLEU that allows an efficient FST-based implementation of an Oracle search (Sokolov et al., 2012) . We draw two conclusions from these results: (a) that there is a significant potential for improvement from rescoring, in that even for small 100-best lists the improvement found by the Oracle can exceed 10 BLEU points; and (b) that the output lattices are not perfect in that the Oracle score is not 100.",
"cite_spans": [
{
"start": 368,
"end": 390,
"text": "(Sokolov et al., 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation Performance",
"sec_num": "3.2"
},
{
"text": "We now report on rescoring procedures intended to improve the first-pass lattices generated by GYRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring GYRO output",
"sec_num": "3.2.1"
},
{
"text": "Higher-order language models. The first row in Table 2 reports the result obtained when applying a 5-gram LM to the GYRO lattices generated under a 4-gram LM. The 5-gram is estimated over the complete 10.6 billion word collection using the uniform backoff strategy of (Brants et al., 2007) . We find improvements of 3.0 and 1.9 BLEU with respect to the 4-gram baseline.",
"cite_spans": [
{
"start": 265,
"end": 286,
"text": "(Brants et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rescoring GYRO output",
"sec_num": "3.2.1"
},
{
"text": "Dependency language models. We now investigate the benefits of applying a dependency LM (Shen et al., 2010) in a rescoring mode. We run the MALT dependency parser on the generation hypotheses and rescore them according to log(p_LM) + \u03bb_d log(p_depLM), i.e. a weighted combination of the word-based LM and the dependency LM scores. Since it is not possible to run the parser on the entire lattice, we carry out this experiment using the 100-best lists generated from the previous experiment ('+5g'). The dependency LM is a 3-gram estimated on the entire GigaWord version 5 collection (\u223c5 billion words). Results are shown in rows 2 and 3 in Table 2 , where in each row the performance over the set used to tune the parameter \u03bb_d is marked. In either case, we observe modest but consistent gains across both sets. We find this very promising considering that the parser has been applied to noisy input sentences.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "(Shen et al., 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 644,
"end": 651,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rescoring GYRO output",
"sec_num": "3.2.1"
},
{
"text": "Minimum Bayes Risk Decoding. We also use Lattice-based Minimum Bayes Risk (LMBR) decoding (Tromble et al., 2008; Blackwood et al., 2010a) . Here, the posteriors over n-grams are computed over the output lattices generated by the GYRO system. The result is shown in the row labeled '+5g +LMBR', where again we find modest but consistent gains across the two sets with respect to the 5-gram rescored lattices.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "(Tromble et al., 2008;",
"ref_id": "BIBREF33"
},
{
"start": 113,
"end": 137,
"text": "Blackwood et al., 2010a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring GYRO output",
"sec_num": "3.2.1"
},
{
"text": "LMBR with MT posteriors. We investigate LMBR decoding when applying to the generation lattice a linear combination of the n-gram pos- Table 2 : Results in BLEU when rescoring the lattices generated by GYRO using various strategies. Tuning conditions are marked by .",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rescoring GYRO output",
"sec_num": "3.2.1"
},
{
"text": "terior probabilities extracted from (a) the same generation lattice, and (b) from lattices produced by an Arabic-to-English hierarchical-phrase based MT system developed for the NIST 2012 OpenMT Evaluation. As noted, LMBR relies on a posterior distribution over n-grams as part of its computation or risk. Here, we use LMBR with a posterior of the form \u03b1 p GYRO + (1-\u03b1) p MT . This is effectively performing a system combination between the GYRO generation system and the MT system (de Gispert et al., 2009; DeNero et al., 2010) but restricting the hypothesis space to be that of the GYRO lattice (Blackwood et al., 2010b) . Results are reported in the last two rows of Table 2 . Relative to 5-gram LM rescoring alone, we see gains in BLEU of 2.3 and 4.4 in MT08-nw and MT09nw, suggesting that posterior distributions over ngrams provided by SMT systems can give good guidance in generation. These results also suggest that if we knew what words to use, we could generate very good quality translation output. Figure 7 gives GYRO generation examples. These are often fairly fluent, and it is striking how the output can be improved with guidance from the SMT system. The examples also show the harshness of BLEU, e.g. 'german and turkish officials' is penalised with respect to ' turkish and german officials.' Metrics based on richer meaning representations, such as HyTER, could be valuable here (Dreyer and Marcu, 2012) . Figure 8 shows BLEU and Sentence Precision Rate (SPR), the percentage of exactly reconstructed sentences. As expected, performance is sensitive to length. For bags of up to 10, GYRO reconstructs the reference perfectly in over 65% of the cases. This is a harsh performance metric, and performance falls to less than 10% for bags of size 16-20. For bags of 6-10 words, we find BLEU scores of greater than 85. Performance is not as good for shorter segments, since these are often headlines and bylines that can be ambiguous in their ordering. 
The BLEU scores for bags of size 21 and higher are an artefact of our sentence splitting procedure. However, even for bag sizes of 16-to-20 GYRO has BLEU scores above 55.",
"cite_spans": [
{
"start": 482,
"end": 507,
"text": "(de Gispert et al., 2009;",
"ref_id": "BIBREF11"
},
{
"start": 508,
"end": 528,
"text": "DeNero et al., 2010)",
"ref_id": "BIBREF13"
},
{
"start": 597,
"end": 622,
"text": "(Blackwood et al., 2010b)",
"ref_id": "BIBREF7"
},
{
"start": 1398,
"end": 1422,
"text": "(Dreyer and Marcu, 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 670,
"end": 677,
"text": "Table 2",
"ref_id": null
},
{
"start": 1010,
"end": 1018,
"text": "Figure 7",
"ref_id": null
},
{
"start": 1425,
"end": 1433,
"text": "Figure 8",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Rescoring GYRO output",
"sec_num": "3.2.1"
},
{
"text": "Finally, the CCG and 4g-GYRO+5g systems were compared using crowd-sourced fluency judgements gathered on CrowdFlower. Judges were asked 'Please read the reference sentence and compare the fluency of items 1 & 2.' The test was a selection of 75 fluent sentences of 20 words or less taken from the MT dev sets. Each comparison was made by at least 3 judges. With an average selection confidence of 0.754, GYRO was preferred in 45 cases, CCG was preferred in 14 cases, and systems were tied 16 times. This is consistent with the significant difference in BLEU between these systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Assessments",
"sec_num": "3.4"
},
{
"text": "Our work is related to surface realisation within natural language generation (NLG). NLG typically assumes a relatively rich input representation intended to provide syntactic, semantic, and other relationships to guide generation. Example input representations are Abstract Meaning Representations (Langkilde and Knight, 1998) , attributevalue pairs (Ratnaparkhi, 2000) , lexical predicateargument structures (Bangalore and Rambow, 2000) , Interleave-Disjunction-Lock (IDL) expressions (Nederhof and Satta, 2004; Soricut and Marcu, 2005; Soricut and Marcu, 2006) , CCGbank derived grammars (White et al., 2007) , critics of bush 's iraq policy in a third of republican senator joins the list .",
"cite_spans": [
{
"start": 299,
"end": 327,
"text": "(Langkilde and Knight, 1998)",
"ref_id": "BIBREF21"
},
{
"start": 351,
"end": 370,
"text": "(Ratnaparkhi, 2000)",
"ref_id": "BIBREF27"
},
{
"start": 410,
"end": 438,
"text": "(Bangalore and Rambow, 2000)",
"ref_id": "BIBREF2"
},
{
"start": 487,
"end": 513,
"text": "(Nederhof and Satta, 2004;",
"ref_id": "BIBREF23"
},
{
"start": 514,
"end": 538,
"text": "Soricut and Marcu, 2005;",
"ref_id": "BIBREF30"
},
{
"start": 539,
"end": 563,
"text": "Soricut and Marcu, 2006)",
"ref_id": "BIBREF31"
},
{
"start": 591,
"end": 611,
"text": "(White et al., 2007)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work and Conclusion",
"sec_num": "4"
},
{
"text": "the list of critics of bush 's policy in iraq a third republican senator joins . 82.9 REF it added that these messages were sent to president bashar al-asad through turkish and german officials . (a-c) it added that president bashar al-asad through these messages were sent to german and turkish officials . 61.5 (d) it added that these messages were sent to president bashar al-asad through german and turkish officials . Figure 7: 4g GYRO (Table 2) meaning representation languages (Wong and Mooney, 2007) and unordered syntactic dependency trees (Guo et al., 2011; Bohnet et al., 2011; Belz et al., 2011; Belz et al., 2012) 6 . These input representations are suitable for applications such as dialog systems, where the system maintains the information needed to generate the input representation for NLG (Lemon, 2011) , or summarisation, where representations can be automatically extracted from coherent, well-formed text (Barzilay and Elhadad, 2011; Althaus et al., 2004) . However, there are other applications, such as automatic speech recognition and SMT that could possibly benefit from NLG, but which do not generate reliable linguistic annotation in their output. For these problems it would be useful to have systems, as described in this paper, which do not require rich input representations. We plan to investigate these applications in future work.",
"cite_spans": [
{
"start": 484,
"end": 507,
"text": "(Wong and Mooney, 2007)",
"ref_id": "BIBREF36"
},
{
"start": 549,
"end": 567,
"text": "(Guo et al., 2011;",
"ref_id": "BIBREF18"
},
{
"start": 568,
"end": 588,
"text": "Bohnet et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 589,
"end": 607,
"text": "Belz et al., 2011;",
"ref_id": "BIBREF4"
},
{
"start": 608,
"end": 626,
"text": "Belz et al., 2012)",
"ref_id": "BIBREF5"
},
{
"start": 808,
"end": 821,
"text": "(Lemon, 2011)",
"ref_id": "BIBREF22"
},
{
"start": 927,
"end": 955,
"text": "(Barzilay and Elhadad, 2011;",
"ref_id": "BIBREF3"
},
{
"start": 956,
"end": 977,
"text": "Althaus et al., 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 441,
"end": 450,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "(d)",
"sec_num": "39.1"
},
{
"text": "There is much opportunity for future development. To improve coverage, the grammars of Section 2.1 could perform generation with overlapping, rather than concatenated, n-grams; and features could be included to define tuneable loglinear rule probabilities (Och and Ney, 2002; Chiang, 2007) . The GYRO grammar could be extended using techniques from string-to-tree SMT, in particular by modifying the grammar so that output derivations respect dependencies (Shen et 6 Surface Realisation Task, Generation Challenges 2011, www.nltg.brighton.ac.uk/research/ genchal11 al., 2010); this will make it easier to integrate dependency LMs into GYRO. Finally, it would be interesting to couple the GYRO architecture with automata-based models of poetry and rhythmic text (Greene et al., 2010) .",
"cite_spans": [
{
"start": 256,
"end": 275,
"text": "(Och and Ney, 2002;",
"ref_id": "BIBREF24"
},
{
"start": 276,
"end": 289,
"text": "Chiang, 2007)",
"ref_id": "BIBREF10"
},
{
"start": 456,
"end": 466,
"text": "(Shen et 6",
"ref_id": null
},
{
"start": 761,
"end": 782,
"text": "(Greene et al., 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(d)",
"sec_num": "39.1"
},
{
"text": "This step can be performed much more efficiently with a single forward pass of the resultant lattice. This is possible because the replace operation can yield a transducer where the input symbols encode a pointer to the original FSA, so in traversing the arcs of the pruned lattice, we know which arcs will belong to which cell FSAs. However, for ease of explanation we avoid this detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.itl.nist.gov/iad/mig/tests/mt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Any word in the bag that does not occur in the large collection of English material is added as a 1-gram rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Obtained by pruning at \u03b2 = 10 in generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at www.maltparser.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7-ICT-2009-4) under grant agreement number 247762, the FAUST project faust-fp7.eu/faust/, and the EPSRC (UK) Programme Grant EP/I031022/1 (Natural Speech Technology) natural-speech-technology.org .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "OpenFst: A general and efficient weighted finite-state transducer library",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Schalkwyk",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of CIAA",
"volume": "",
"issue": "",
"pages": "11--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wo- jciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state trans- ducer library. In Proceedings of CIAA, pages 11-23, Prague, Czech Republic.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Computing locally coherent discourses",
"authors": [
{
"first": "Ernst",
"middle": [],
"last": "Althaus",
"suffix": ""
},
{
"first": "Nikiforos",
"middle": [],
"last": "Karamanis",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ernst Althaus, Nikiforos Karamanis, and Alexander Koller. 2004. Computing locally coherent dis- courses. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 399. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Exploiting a probabilistic hierarchical model for generation",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "42--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore and Owen Rambow. 2000. Ex- ploiting a probabilistic hierarchical model for gen- eration. In Proceedings of the 18th conference on Computational linguistics -Volume 1, COLING '00, pages 42-48, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "ferring strategies for sentence ordering in multidocument news summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1106.1820"
]
},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Noemie Elhadad. 2011. In- ferring strategies for sentence ordering in multi- document news summarization. arXiv preprint arXiv:1106.1820.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The first surface realisation shared task: Overview and evaluation results",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Espinosa",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "217--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anja Belz, Mike White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The first surface realisation shared task: Overview and eval- uation results. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 217-226, Nancy, France.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The surface realisation task: Recent developments and future plans",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 7th International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "136--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anja Belz, Bernd Bohnet, Simon Mille, Leo Wanner, and Michael White. 2012. The surface realisation task: Recent developments and future plans. In Pro- ceedings of the 7th International Natural Language Generation Conference, pages 136-140, Utica, IL, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Efficient path counting transducers for minimum Bayes-risk decoding of statistical machine translation lattices",
"authors": [
{
"first": "Graeme",
"middle": [],
"last": "Blackwood",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL: Short Papers",
"volume": "",
"issue": "",
"pages": "27--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graeme Blackwood, Adri\u00e0 de Gispert, and William Byrne. 2010a. Efficient path counting transducers for minimum Bayes-risk decoding of statistical ma- chine translation lattices. In Proceedings of ACL: Short Papers, pages 27-32, Uppsala, Sweden.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fluency constraints for minimum Bayes-risk decoding of statistical machine translation lattices",
"authors": [
{
"first": "Graeme",
"middle": [],
"last": "Blackwood",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "71--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graeme Blackwood, Adri\u00e0 de Gispert, and William Byrne. 2010b. Fluency constraints for minimum Bayes-risk decoding of statistical machine transla- tion lattices. In Proceedings of COLING, pages 71- 79, Beijing, China.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "<StuMaBa>: From deep representation to surface",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "232--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet, Simon Mille, Beno\u00eet Favre, and Leo Wanner. 2011. <StuMaBa>: From deep represen- tation to surface. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 232-235, Nancy, France.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Large language models in machine translation",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Ashok",
"middle": [
"C."
],
"last": "Popat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J."
],
"last": "Och",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "858--867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of EMNLP-CoNLL, pages 858-867, Prague, Czech Re- public.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based trans- lation. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Minimum Bayes risk combination of translation hypotheses from alternative morphological decompositions",
"authors": [
{
"first": "Sami",
"middle": [],
"last": "Adri\u00e0 De Gispert",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Kurimo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of HLT-NAACL: Short Papers",
"volume": "",
"issue": "",
"pages": "73--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adri\u00e0 de Gispert, Sami Virpioja, Mikko Kurimo, and William Byrne. 2009. Minimum Bayes risk com- bination of translation hypotheses from alternative morphological decompositions. In Proceedings of HLT-NAACL: Short Papers, pages 73-76, Boulder, CO, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hierarchical phrase-based translation with weighted finite-state transducers and shallow-n grammars",
"authors": [
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "Gonzalo",
"middle": [],
"last": "Iglesias",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Blackwood",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [
"R."
],
"last": "Banga",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "3",
"pages": "505--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adri\u00e0 de Gispert, Gonzalo Iglesias, Graeme Black- wood, Eduardo R. Banga, and William Byrne. 2010. Hierarchical phrase-based translation with weighted finite-state transducers and shallow-n grammars. Computational Linguistics, 36(3):505-533.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Model combination for machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of HTL-NAACL",
"volume": "",
"issue": "",
"pages": "975--983",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero, Shankar Kumar, Ciprian Chelba, and Franz Och. 2010. Model combination for machine translation. In Proceedings of HTL-NAACL, pages 975-983, Los Angeles, CA, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hyter: Meaning-equivalent semantics for translation evaluation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "162--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer and Daniel Marcu. 2012. Hyter: Meaning-equivalent semantics for translation eval- uation. In Proceedings of NAACL-HLT, pages 162- 171, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Further meta-evaluation of broad-coverage surface realization",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Espinosa",
"suffix": ""
},
{
"first": "Rajakrishnan",
"middle": [],
"last": "Rajkumar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Shoshana",
"middle": [],
"last": "Berleant",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "564--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Espinosa, Rajakrishnan Rajkumar, Michael White, and Shoshana Berleant. 2010. Further meta-evaluation of broad-coverage surface realiza- tion. In Proceedings of EMNLP, pages 564-574, Cambridge, MA, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatically learning sourceside reordering rules for large scale machine translation",
"authors": [
{
"first": "Dmitriy",
"middle": [],
"last": "Genzel",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "376--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitriy Genzel. 2010. Automatically learning source- side reordering rules for large scale machine trans- lation. In Proceedings of COLING, pages 376-384, Beijing, China.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic analysis of rhythmic poetry with applications to generation and translation",
"authors": [
{
"first": "Erica",
"middle": [],
"last": "Greene",
"suffix": ""
},
{
"first": "Tugba",
"middle": [],
"last": "Bodrumlu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "524--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erica Greene, Tugba Bodrumlu, and Kevin Knight. 2010. Automatic analysis of rhythmic poetry with applications to generation and translation. In Pro- ceedings of EMNLP, pages 524-533, Cambridge, MA, USA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dependency-based n-gram models for general purpose sentence realisation",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2011,
"venue": "Natural Language Engineering",
"volume": "17",
"issue": "04",
"pages": "455--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuqing Guo, Josef Van Genabith, and Haifeng Wang. 2011. Dependency-based n-gram models for gen- eral purpose sentence realisation. Natural Language Engineering, 17(04):455-483.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hierarchical phrase-based translation representations",
"authors": [
{
"first": "Gonzalo",
"middle": [],
"last": "Iglesias",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1373--1383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gonzalo Iglesias, Cyril Allauzen, William Byrne, Adri\u00e0 de Gispert, and Michael Riley. 2011. Hi- erarchical phrase-based translation representations. In Proceedings of EMNLP, pages 1373-1383, Edin- burgh, Scotland, UK.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Decoding complexity in wordreplacement translation models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "4",
"pages": "607--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight. 1999. Decoding complexity in word- replacement translation models. Computational Linguistics, 25(4):607-615.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Generation that exploits corpus-based statistical knowledge",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ACL/COLING",
"volume": "",
"issue": "",
"pages": "704--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Langkilde and Kevin Knight. 1998. Gener- ation that exploits corpus-based statistical knowl- edge. In Proceedings of ACL/COLING, pages 704- 710, Montreal, Quebec, Canada.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning what to say and how to say it: Joint optimisation of spoken dialogue management and natural language generation",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2011,
"venue": "Computer Speech & Language",
"volume": "25",
"issue": "2",
"pages": "210--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Lemon. 2011. Learning what to say and how to say it: Joint optimisation of spoken dialogue man- agement and natural language generation. Com- puter Speech & Language, 25(2):210-221.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "IDLexpressions: A formalism for representing and parsing finite languages in natural language processing",
"authors": [
{
"first": "Mark-Jan",
"middle": [],
"last": "Nederhof",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "21",
"issue": "",
"pages": "287--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark-Jan Nederhof and Giorgio Satta. 2004. IDL- expressions: A formalism for representing and pars- ing finite languages in natural language processing. Journal of Artificial Intelligence Research, 21:287- 317.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive training and maximum entropy models for sta- tistical machine translation. In Proceedings of ACL, pages 295-302, Philadelphia, PA, USA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311-318, Philadelphia, PA, USA.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Simple and efficient model filtering in statistical machine translation",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Aurelien",
"middle": [],
"last": "Waite",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2012,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "98",
"issue": "",
"pages": "5--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Pino, Aurelien Waite, and William Byrne. 2012. Simple and efficient model filtering in statistical ma- chine translation. The Prague Bulletin of Mathemat- ical Linguistics, 98:5-24.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Trainable methods for surface natural language generation",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "194--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 2000. Trainable methods for sur- face natural language generation. In Proceedings of NAACL, pages 194-201, Seattle, WA, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "String-to-dependency statistical machine translation",
"authors": [
{
"first": "Libin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jinxi",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "4",
"pages": "649--671",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Libin Shen, Jinxi Xu, and Ralph Weischedel. 2010. String-to-dependency statistical machine transla- tion. Computational Linguistics, 36(4):649-671.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Computing lattice bleu oracle scores for machine translation",
"authors": [
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wisniewski",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "120--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artem Sokolov, Guillaume Wisniewski, and Francois Yvon. 2012. Computing lattice bleu oracle scores for machine translation. In Proceedings of EACL, pages 120-129, Avignon, France.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Towards developing generation algorithms for text-to-text applications",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "66--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Soricut and Daniel Marcu. 2005. Towards devel- oping generation algorithms for text-to-text applica- tions. In Proceedings of ACL, pages 66-74, Ann Arbor, MI, USA.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Stochastic Language Generation Using WIDL-Expressions and its Application in Machine Translation and Summarization",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1105--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Soricut and Daniel Marcu. 2006. Stochastic Language Generation Using WIDL-Expressions and its Application in Machine Translation and Summa- rization. In Proceedings of ACL, pages 1105-1112, Sydney, Australia.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning linear ordering problems for better translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1007--1016",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Tromble and Jason Eisner. 2009. Learning linear ordering problems for better translation. In Proceed- ings of EMNLP, pages 1007-1016, Singapore.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Lattice Minimum Bayes-Risk decoding for statistical machine translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "620--629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Tromble, Shankar Kumar, Franz Och, and Wolf- gang Macherey. 2008. Lattice Minimum Bayes- Risk decoding for statistical machine translation. In Proceedings of EMNLP, pages 620-629, Honolulu, Hawaii, USA.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Improving grammaticality in statistical sentence generation: Introducing a dependency spanning tree algorithm with an argument satisfaction model",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "C\u00e9cile",
"middle": [],
"last": "Paris",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "852--860",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Wan, Mark Dras, Robert Dale, and C\u00e9cile Paris. 2009. Improving grammaticality in statisti- cal sentence generation: Introducing a dependency spanning tree algorithm with an argument satisfac- tion model. In Proceedings of EACL, pages 852- 860, Athens, Greece.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Towards broad coverage surface realization with ccg",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Rajakrishnan",
"middle": [],
"last": "Rajkumar",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of the Workshop on Using Corpora for NLG: Language Generation and Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael White, Rajakrishnan Rajkumar, and Scott Martin. 2007. Towards broad coverage surface real- ization with ccg. In Proc. of the Workshop on Using Corpora for NLG: Language Generation and Ma- chine Translation (UCNLG+ MT).",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Generation by inverting a semantic parser that uses statistical machine translation",
"authors": [
{
"first": "Yuk",
"middle": [
"Wah"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Raymond J",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Human Language Technologies: The Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT-07)",
"volume": "",
"issue": "",
"pages": "172--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuk Wah Wong and Raymond J Mooney. 2007. Gen- eration by inverting a semantic parser that uses sta- tistical machine translation. Proceedings of Hu- man Language Technologies: The Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT-07), pages 172-179.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Syntaxbased Grammaticality Improvement using CCG and Guided Search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1147--1157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2011. Syntax- based Grammaticality Improvement using CCG and Guided Search. In Proceedings of EMNLP, pages 1147-1157, Edinburgh, Scotland, U.K.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Syntax-based word ordering incorporating a large-scale language model",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Blackwood",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "736--746",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang, Graeme Blackwood, and Stephen Clark. 2012. Syntax-based word ordering incorporating a large-scale language model. In Proceedings of EACL, pages 736-746, Avignon, France.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "eration from {a, b, c, d, e} with phrases {\"a b\", \"b a\", \"d e c\"}.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Hence M [X(I), Y (I)] is the cell associated with bit string I. In the inverse direction, we using the notation I x,y to indicate a bit string associated with the cell M [x, y].",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "If a rule is found in cell M [N, 1], there is a parse (line 11); otherwise none exists. The complexity of the algorithm is O(2 N \u2022 K). If back-pointers are kept, traversing these from cell M [N, 1] yields all the generated word sequences.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Parsing algorithm for a bag of words.",
"uris": null
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"text": "RTN representing generation from {a, b, c, d, e} with phrases {\"a b\", \"b a\", \"d e c\"} (top) and its expansion as an FSA (bottom).",
"uris": null
},
"FIGREF6": {
"num": null,
"type_str": "figure",
"text": "Pseudocode for Algorithm 1 (excluding lines 2-3) and Algorithm 2 (including all lines).",
"uris": null
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"text": "Average number of extracted phrases as a function of the bag of word size.",
"uris": null
},
"FIGREF9": {
"num": null,
"type_str": "figure",
"text": "GYRO BLEU score and Sentence Precision Rate as a function of the bag of words size. Computed on the concatenation of MT08-nw and MT09-nw.",
"uris": null
},
"FIGREF10": {
"num": null,
"type_str": "figure",
"text": "republican senator joins the list of critics of bush 's policy in iraq . (a)",
"uris": null
},
"FIGREF11": {
"num": null,
"type_str": "figure",
"text": "output examples, with sentence level BLEU: (a) GYRO+4g; (b) GYRO+5g; (c) GYRO+5g+LMBR; (d) GYRO+5g+LMBR-mt. (a-c) indicates systems with identical hypotheses.",
"uris": null
}
}
}
}