{
"paper_id": "C08-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:26:16.626485Z"
},
"title": "Dependency-Based N-Gram Models for General Purpose Sentence Realisation",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "City University",
"location": {
"settlement": "Dublin 9",
"country": "Ireland"
}
},
"email": "yguo@computing.dcu.ie"
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM CAS",
"location": {
"settlement": "Dublin",
"country": "Ireland"
}
},
"email": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "wanghaifeng@rdc.toshiba.com.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present dependency-based n-gram models for general-purpose, wide-coverage, probabilistic sentence realisation. Our method linearises unordered dependencies in input representations directly rather than via the application of grammar rules, as in traditional chart-based generators. The method is simple, efficient, and achieves competitive accuracy and complete coverage on standard English (Penn-II, 0.7440 BLEU, 0.05 sec/sent) and Chinese (CTB6, 0.7123 BLEU, 0.14 sec/sent) test data.",
"pdf_parse": {
"paper_id": "C08-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "We present dependency-based n-gram models for general-purpose, wide-coverage, probabilistic sentence realisation. Our method linearises unordered dependencies in input representations directly rather than via the application of grammar rules, as in traditional chart-based generators. The method is simple, efficient, and achieves competitive accuracy and complete coverage on standard English (Penn-II, 0.7440 BLEU, 0.05 sec/sent) and Chinese (CTB6, 0.7123 BLEU, 0.14 sec/sent) test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentence generation, 1 or surface realisation, can be described as the problem of producing syntactically, morphologically, and orthographically correct sentences from a given semantic or syntactic representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most general-purpose realisation systems developed to date transform the input into surface form via the application of a set of grammar rules based on particular linguistic theories, e.g. Lexical Functional Grammar (LFG), Head-Driven Phrase Structure Grammar (HPSG), Combinatory Categorial Grammar (CCG), Tree Adjoining Grammar (TAG) etc. These grammar rules are either carefully handcrafted, as those used in FUF/SURGE (Elhadad, 1991) , LKB (Carroll et al.,",
"cite_spans": [
{
"start": 421,
"end": 436,
"text": "(Elhadad, 1991)",
"ref_id": "BIBREF8"
},
{
"start": 443,
"end": 459,
"text": "(Carroll et al.,",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 In this paper, the term \"generation\" is used generally for what is more strictly referred to by the term \"tactical generation\" or \"surface realisation\". 1999), OpenCCG (White, 2004) and XLE (Crouch et al., 2007) , or created semi-automatically (Belz, 2007) , or fully automatically extracted from annotated corpora, like the HPSG (Nakanishi et al., 2005) , LFG (Cahill and van Genabith, 2006; Hogan et al., 2007) and CCG (White et al., 2007) resources derived from the Penn-II Treebank (PTB) (Marcus et al., 1993) .",
"cite_spans": [
{
"start": 170,
"end": 183,
"text": "(White, 2004)",
"ref_id": "BIBREF23"
},
{
"start": 192,
"end": 213,
"text": "(Crouch et al., 2007)",
"ref_id": null
},
{
"start": 246,
"end": 258,
"text": "(Belz, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 332,
"end": 356,
"text": "(Nakanishi et al., 2005)",
"ref_id": "BIBREF16"
},
{
"start": 363,
"end": 394,
"text": "(Cahill and van Genabith, 2006;",
"ref_id": "BIBREF3"
},
{
"start": 395,
"end": 414,
"text": "Hogan et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 423,
"end": 443,
"text": "(White et al., 2007)",
"ref_id": "BIBREF24"
},
{
"start": 494,
"end": 515,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Over the last decade, probabilistic models have become widely used in the field of natural language generation (NLG), often in the form of a realisation ranker in a two-stage generation architecture. The two-stage methodology is characterised by a separation between generation and selection, in which rule-based methods are used to generate a space of possible paraphrases, and statistical methods are used to select the most likely realisation from the space. By and large, two statistical models are used in the rankers to choose output strings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 N-gram language models over different units, such as word-level bigram/trigram models (Bangalore and Rambow, 2000; Langkilde, 2000) , or factored language models integrated with syntactic tags (White et al., 2007) .",
"cite_spans": [
{
"start": 88,
"end": 116,
"text": "(Bangalore and Rambow, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 117,
"end": 133,
"text": "Langkilde, 2000)",
"ref_id": "BIBREF13"
},
{
"start": 195,
"end": 215,
"text": "(White et al., 2007)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Log-linear models with different syntactic and semantic features (Velldal and Oepen, 2005; Nakanishi et al., 2005; Cahill et al., 2007) .",
"cite_spans": [
{
"start": 67,
"end": 92,
"text": "(Velldal and Oepen, 2005;",
"ref_id": "BIBREF22"
},
{
"start": 93,
"end": 116,
"text": "Nakanishi et al., 2005;",
"ref_id": "BIBREF16"
},
{
"start": 117,
"end": 137,
"text": "Cahill et al., 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To date, however, probabilistic models learning direct mappings from generation input to surface strings, without the effort to construct a grammar, have rarely been explored. An exception is Ratnaparkhi (2000) , who presents maximum entropy models to learn attribute ordering and lexical choice for sentence generation from a semantic representation of attribute-value pairs, restricted to an air travel domain. In this paper, we develop an efficient, widecoverage generator based on simple n-gram models to directly linearise dependency relations from the input representations. Our work is aimed at general-purpose sentence generation but couched in the framework of Lexical Functional Grammar. We give an overview of LFG and the dependency representations we use in Section 2. We describe the general idea of our dependency-based generation in Section 3 and give details of the n-gram generation models in Section 4. Section 5 explains the experiments and provides results for both English and Chinese data. Section 6 compares the results with previous work and between languages. Finally we conclude with a summary and outline future work.",
"cite_spans": [
{
"start": 192,
"end": 210,
"text": "Ratnaparkhi (2000)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[Figure 1 AVM, garbled in extraction; recoverable content: the f-structure for \"We believe in the law of averages\": PRED 'believe', TENSE pres, SUBJ f2 [PRED 'pro', PERS 1, NUM pl], OBL f3 [PFORM 'in', OBJ f4 [PRED 'law', PERS 3, NUM sg, SPEC f5 [DET f6 [PRED 'the']], ADJ {f7 [PFORM 'of', OBJ f8 [PRED 'average', PERS 3, NUM pl]]}]]. Panel labels: (a.) c-structure, (b.) f-structure]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lexical Functional Grammar (Kaplan and Bresnan, 1982 ) is a constraint-based grammar formalism which postulates (minimally) two levels of representation: c(onstituent)-structure and f(unctional)-structure. As illustrated in Figure 1 , a c-structure is a conventional phrase structure tree and captures surface grammatical configurations. The f-structure encodes more abstract functional relations like SUBJ(ect), OBJ(ect) and ADJ(unct) . F-structures are hierarchical attribute-value matrix representations of bilexical labelled dependencies, approximating to basic predicateargument/adjunct structures. 2 Attributes in fstructure come in two different types:",
"cite_spans": [
{
"start": 27,
"end": 52,
"text": "(Kaplan and Bresnan, 1982",
"ref_id": "BIBREF11"
},
{
"start": 413,
"end": 421,
"text": "OBJ(ect)",
"ref_id": null
},
{
"start": 426,
"end": 435,
"text": "ADJ(unct)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 224,
"end": 232,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Lexical Functional Grammar",
"sec_num": "2.1"
},
{
"text": "\u2022 Grammatical Functions (GFs) indicate the relationship between the predicate and its dependents. GFs can be divided into:\n- arguments, which are subcategorised for by the predicate, such as SUBJ(ect) and OBJ(ect), and thus can occur only once in each local f-structure;\n- modifiers, like ADJ(unct) and COORD(inate), which are not subcategorised for by the predicate and can occur any number of times in a local f-structure.",
"cite_spans": [
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Functional Grammar",
"sec_num": "2.1"
},
{
"text": "\u2022 Atomic-valued features describe linguistic properties of the predicate, such as TENSE, ASPECT, MOOD, PERS, NUM, CASE etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Functional Grammar",
"sec_num": "2.1"
},
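The two attribute types can be made concrete with a small sketch. Below is a hypothetical encoding (ours, not prescribed by the paper) of the local f-structure for "the law of averages" as a nested Python dict, in which GFs map to sub-f-structures and atomic-valued features map to atoms:

```python
# Hypothetical encoding of an LFG f-structure as a nested dict
# (illustrative only; the paper does not prescribe a data structure).
# Grammatical functions (SPEC, ADJ, ...) map to sub-f-structures;
# atomic-valued features (PERS, NUM, ...) map to atomic values.

f4 = {
    "PRED": "law",
    "PERS": 3,                                  # atomic-valued feature
    "NUM": "sg",                                # atomic-valued feature
    "SPEC": {"DET": {"PRED": "the"}},           # argument-type GF: at most one
    "ADJ": [                                    # modifier GF: a set, any number
        {"PFORM": "of", "OBJ": {"PRED": "average", "PERS": 3, "NUM": "pl"}},
    ],
}

def gf_slots(fs):
    """Return the linearisable slots at one local f-structure.
    Following the paper, PRED counts as a slot in the GF sequence."""
    atomic = {"PFORM", "PERS", "NUM", "TENSE", "ASPECT", "MOOD", "CASE"}
    return [a for a in fs if a not in atomic]

print(gf_slots(f4))  # ['PRED', 'SPEC', 'ADJ']
```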
{
"text": "Work on generation in LFG generally assumes that the generation task is to determine the set of strings of the language that corresponds to a specified fstructure, given a particular grammar (Kaplan and Wedekind, 2000) . Previous work on generation within LFG includes the XLE, 3 Cahill and van Genabith (2006) , Hogan et al. (2007) , among others. The XLE generates sentences from fstructures according to parallel handcrafted grammars for English, French, German, Norwegian, Japanese, and Urdu. Based on the German XLE resources, Cahill et al. (2007) describe a two-stage, log-linear generation model. Cahill and van Genabith (2006) and Hogan et al. (2007) present a chart generator using wide-coverage PCFG-based LFG approximations automatically acquired from treebanks (Cahill et al., 2004) .",
"cite_spans": [
{
"start": 191,
"end": 218,
"text": "(Kaplan and Wedekind, 2000)",
"ref_id": "BIBREF12"
},
{
"start": 278,
"end": 279,
"text": "3",
"ref_id": null
},
{
"start": 280,
"end": 310,
"text": "Cahill and van Genabith (2006)",
"ref_id": "BIBREF3"
},
{
"start": 313,
"end": 332,
"text": "Hogan et al. (2007)",
"ref_id": "BIBREF10"
},
{
"start": 522,
"end": 542,
"text": "Cahill et al. (2007)",
"ref_id": "BIBREF4"
},
{
"start": 594,
"end": 624,
"text": "Cahill and van Genabith (2006)",
"ref_id": "BIBREF3"
},
{
"start": 629,
"end": 648,
"text": "Hogan et al. (2007)",
"ref_id": "BIBREF10"
},
{
"start": 763,
"end": 784,
"text": "(Cahill et al., 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation from F-Structures",
"sec_num": "2.2"
},
{
"text": "Traditional LFG generation models can be regarded as the reverse process of parsing, and use bi-directional f-structure-annotated CFG rules. In a sense, the generation process is driven by an input dependency (or f-structure) representation, but proceeds through the \"detour\" of using dependency-annotated CFG (or PCFG) grammars and chart-based generators. In this paper, we develop a simple n-gram and dependency-based, wide-coverage, robust, probabilistic generation model, which cuts out the middle-man from previous approaches: the CFG-component. Our approach is data-driven: following the methodology in (Cahill et al., 2004; Guo et al., 2007) , we automatically convert the English Penn-II treebank and the Chinese Penn Treebank (Xue et al., 2005) into f-structure banks. F-structures such as Figure 1 (b.) are unordered, i.e. they do not carry information on the relative surface order of local GFs. In order to generate a string from an f-structure, we need to linearise the GFs (at each level of embedding) in the f-structure (and map lemmas and features to surface forms). We do this in terms of n-gram models over GFs. In order to build the n-gram models, we linearise the f-structures automatically produced from treebanks by associating the numerical string position (word offset from start of the sentence) with the predicate in each local f-structure, producing GF sequences as in Figure 1 (c.).",
"cite_spans": [
{
"start": 608,
"end": 629,
"text": "(Cahill et al., 2004;",
"ref_id": "BIBREF2"
},
{
"start": 630,
"end": 647,
"text": "Guo et al., 2007)",
"ref_id": "BIBREF9"
},
{
"start": 734,
"end": 752,
"text": "(Xue et al., 2005)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 798,
"end": 806,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 1397,
"end": 1405,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dependency-Based Generation: the Basic Idea",
"sec_num": "3"
},
{
"text": "Even though the n-gram models are exemplified using LFG f-structures, they are general-purpose models and thus suitable for any bilexical labelled dependency (Nivre, 2006) or predicate-argument type representations, such as labelled feature-value structures.",
"cite_spans": [
{
"start": 158,
"end": 171,
"text": "(Nivre, 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-Based Generation: the Basic Idea",
"sec_num": "3"
},
{
"text": "The primary task of a sentence generator is to determine the linear order of constituents and words, represented as lemmas in predicates in f-structures. At a particular local f-structure, the task of generating a string covered by the local f-structure is equivalent to linearising all the GFs present at that local f-structure. E.g. in f 4 in Figure 1 , the unordered set of local GFs {SPEC, PRED, ADJ} generates the surface sequence \"the law of averages\". We linearise the GFs in the set by computing n-gram models, similar to traditional word-based language models, except using the names of GFs (including PRED) instead of words. Given a (sub-)f-structure F containing m GFs, the n-gram model searches for the best surface sequence S_1^m = s_1...s_m generated by the GF linearisation GF_1^m = GF_1...GF_m, which maximises the probability P(GF_1^m). Using n-gram models, P(GF_1^m) is calculated according to Eq.(1).",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 353,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Basic N-Gram Model",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(GF_1^m) = P(GF_1...GF_m) = \\prod_{k=1}^{m} P(GF_k | GF_{k-n+1}^{k-1})",
"eq_num": "(1)"
}
],
"section": "Basic N-Gram Model",
"sec_num": "4.1"
},
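Eq. (1) can be operationalised as a search over permutations of the local GFs, each permutation scored by an n-gram model over GF names. A minimal bigram (n=2) sketch; the probabilities below are invented for illustration, whereas the paper trains its models with SRILM on GF sequences extracted from treebank f-structures:

```python
from itertools import permutations
from math import log

# Toy bigram probabilities over GF names, with <s>/</s> boundary symbols.
# These numbers are made up; a real model is estimated from training data.
BIGRAM = {
    ("<s>", "SPEC"): 0.6, ("<s>", "PRED"): 0.3, ("<s>", "ADJ"): 0.1,
    ("SPEC", "PRED"): 0.7, ("SPEC", "ADJ"): 0.2,
    ("PRED", "ADJ"): 0.6, ("PRED", "SPEC"): 0.1,
    ("ADJ", "PRED"): 0.3, ("ADJ", "SPEC"): 0.1,
    ("PRED", "</s>"): 0.3, ("ADJ", "</s>"): 0.5, ("SPEC", "</s>"): 0.05,
}

def logprob(seq):
    """log P(GF_1...GF_m) under the bigram model (Eq. 1 with n = 2)."""
    toks = ["<s>", *seq, "</s>"]
    return sum(log(BIGRAM.get(bg, 1e-6)) for bg in zip(toks, toks[1:]))

def linearise(gf_set):
    """Return the highest-scoring order of the local GFs (exhaustive search)."""
    return max(permutations(gf_set), key=logprob)

print(linearise({"SPEC", "PRED", "ADJ"}))  # ('SPEC', 'PRED', 'ADJ')
```

Exhaustive permutation search is exponential in the number of local GFs, but local f-structures typically contain only a handful of GFs, so it stays cheap in practice.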
{
"text": "In addition to the basic n-gram model over bare GFs, we integrate contextual and fine-grained lexical information into several factored models. Eq.(2) additionally conditions the probability of the n-gram on the parent GF label of the current local f-structure f_i, Eq.(3) on the instantiated PRED of the local f-structure f_i, and Eq.(4) lexicalises the model, where each GF is augmented with its own predicate lemma.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_g(GF_1^m) = \\prod_{k=1}^{m} P(GF_k | GF_{k-n+1}^{k-1}, GF_i) (2) P_p(GF_1^m) = \\prod_{k=1}^{m} P(GF_k | GF_{k-n+1}^{k-1}, Pred_i)",
"eq_num": "(3)"
}
],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_l(GF_1^m) = \\prod_{k=1}^{m} P(Lex_k | Lex_{k-n+1}^{k-1})",
"eq_num": "(4)"
}
],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
{
"text": "To avoid data sparseness, the factored n-gram models P_f are smoothed by linear interpolation with the basic n-gram model P, as in Eq.(5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
{
"text": "P_f(GF_1^m) = \u03bbP_f(GF_1^m) + (1 \u2212 \u03bb)P(GF_1^m) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
{
"text": "Additionally, the lexicalised n-gram models P l are combined with the other two models conditioned on the additional parent GF P g and PRED P p , as shown in Eqs. (6) & (7), respectively. Table 1 exemplifies the different n-gram models for the local f-structure f 4 in Figure 1 . Besides grammatical functions, we also make use of atomic-valued features like TENSE, PERS, NUM (etc.) to aid linearisation. The attributes and values of these features are integrated into the GF n-grams for disambiguation (see Section 5.2).",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 269,
"end": 277,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_lg(GF_1^m) = \u03bb_1 P_l(GF_1^m) + \u03bb_2 P_g(GF_1^m) + \u03bb_3 P(GF_1^m)",
"eq_num": "(6)"
}
],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
{
"text": "P_lp(GF_1^m) = \u03bb_1 P_l(GF_1^m) + \u03bb_2 P_p(GF_1^m) + \u03bb_3 P(GF_1^m) (7), where \u03a3_i \u03bb_i = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
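The interpolations in Eqs. (5)-(7) are plain linear mixtures of component model probabilities with weights summing to 1. A sketch with invented component scores and weights (the paper does not report its \u03bb values):

```python
def interpolate(components):
    """Linearly interpolate component model probabilities.

    components: list of (lambda_i, P_i) pairs; the lambda_i must sum to 1,
    as required for Eqs. (5)-(7)."""
    weights = [w for w, _ in components]
    assert abs(sum(weights) - 1.0) < 1e-9, "lambdas must sum to 1"
    return sum(w * p for w, p in components)

# Shape of Eq. (7): P_lp = l1*P_l + l2*P_p + l3*P. Numbers are illustrative.
p_l, p_p, p = 0.20, 0.10, 0.05
p_lp = interpolate([(0.5, p_l), (0.3, p_p), (0.2, p)])
print(round(p_lp, 3))  # 0.14
```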
{
"text": "Model | N-grams | Cond.\nbasic (P) | SPEC PRED ADJ |\ngf (P_g) | SPEC PRED ADJ | OBL\npred (P_p) | SPEC PRED ADJ | 'law'\nlex (P_l) | SPEC PRED['law'] ADJ['of'] |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored N-Gram Models",
"sec_num": "4.2"
},
{
"text": "Our basic n-gram based generation model implements the simplifying assumption that linearisation at one sub-f-structure is independent of linearisation at any other sub-f-structures. This assumption is feasible for projective dependencies. In most cases (at least in English and Chinese), non-projective dependencies are only used to account for Long-Distance Dependencies (LDDs). Consider sentence (1) discussed in Carroll et al. (1999) and its corresponding f-structure in Figure 2 . In LFG f-structures, LDDs are represented via reentrancies between \"dislocated\" TOPIC, TOPIC REL, FOCUS (etc.) GFs and \"source\" GFs subcategorised for by local predicates, but only the dislocated GFs are instantiated in generation. Therefore traces of the source GFs in input f-structures are removed before generation, and non-projective dependencies are transformed into simple projective dependencies.",
"cite_spans": [
{
"start": 416,
"end": 437,
"text": "Carroll et al. (1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 474,
"end": 482,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation Algorithm",
"sec_num": "4.3"
},
{
"text": "(1) How quickly did the newspapers say the athlete ran?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Algorithm",
"sec_num": "4.3"
},
{
"text": "[Figure 2 AVM, garbled in extraction; recoverable content: FOCUS [PRED 'quickly', ADJ {[PRED 'how']}] indexed 1, PRED 'say', SUBJ [PRED 'newspaper', SPEC [PRED 'the']], COMP [PRED 'run', SUBJ [PRED 'athlete', SPEC [PRED 'the']], ADJ = index 1]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Algorithm",
"sec_num": "4.3"
},
{
"text": "Figure 2: schematic f-structure for How quickly did the newspapers say the athlete ran?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Algorithm",
"sec_num": "4.3"
},
{
"text": "In summary, given an input f-structure f , the core algorithm of the generator recursively traverses f and at each sub-f-structure f i : ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Algorithm",
"sec_num": "4.3"
},
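The enumerated steps of the core algorithm did not survive extraction, but the surrounding text describes a recursive traversal that linearises the GFs at each sub-f-structure and realises each predicate. A hypothetical sketch of that control flow (function names, the dict encoding, and the fallback behaviour are ours, not the paper's):

```python
def generate(fs, linearise, realise):
    """Recursively generate a string from an (unordered) f-structure.

    fs: nested dict; GFs map to sub-f-structures, PRED/PFORM to lemmas.
    linearise: orders the local GFs, e.g. by the n-gram models of Section 4.
    realise: maps a lemma plus atomic features to a surface word form.
    """
    atomic = {"PERS", "NUM", "TENSE", "ASPECT", "MOOD", "CASE"}
    local = [gf for gf in fs if gf not in atomic]
    words = []
    for gf in linearise(local, fs):
        val = fs[gf]
        if isinstance(val, dict):        # sub-f-structure: recurse
            words.append(generate(val, linearise, realise))
        elif isinstance(val, list):      # set-valued GF (ADJ, COORD): recurse each
            words.extend(generate(sub, linearise, realise) for sub in val)
        else:                            # PRED / PFORM lemma: realise a word
            words.append(realise(val, fs))
    return " ".join(words)

# Trivial stand-ins for the two components: a fixed ordering and
# lemma-as-word realisation, just to exercise the traversal.
fs = {"PRED": "law", "SPEC": {"PRED": "the"}}
print(generate(fs, lambda g, _: sorted(g, reverse=True), lambda l, _: l))  # the law
```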
{
"text": "To test the performance and coverage of our n-gram-based generation models, experiments are carried out for both English and Chinese, two languages with distinct properties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "5"
},
{
"text": "Experiments on English data are carried out on the WSJ portion of the PTB, using standard training/test/development splits, viz 39,832 sentences from sections 02-21 are used for training, 2,416 sentences from section 23 for testing, while 1,700 sentences from section 22 are held out for development. The latest version of the Penn Chinese Treebank 6.0 (CTB6), excluding the portion of ACE broadcast news, is used for experiments on Chinese data. 4 We follow the recommended splits (in the list-of-file of CTB6) to divide the data into test set, development set and training set. The n-gram models are created using the SRILM toolkit (Stolcke, 2002) with Good-Turing smoothing for both the Chinese and English data. For morphological realisation of English, a set of lexical macros is automatically extracted from the training data. This is not required for Chinese surface realisation as Chinese has very little morphology. Lexical macro examples are listed in Table 3.\nlexical macro | surface word\npred=law, num=sg, pers=3 | law\npred=average, num=pl, pers=3 | averages\npred=believe, num=pl, tense=pres | believe\nTable 3 : Examples of lexical macros",
"cite_spans": [
{
"start": 447,
"end": 448,
"text": "4",
"ref_id": null
},
{
"start": 634,
"end": 649,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 963,
"end": 971,
"text": "Table 3.",
"ref_id": null
},
{
"start": 1107,
"end": 1114,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "5.1"
},
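The lexical macros of Table 3 amount to a lookup from a lemma plus atomic-valued features to an inflected surface form. A sketch using the three macros shown in the table; the keying scheme and the bare-lemma fallback are our own assumptions, not the paper's:

```python
# Lexical macros from Table 3, keyed on frozen (attribute, value) sets.
# Falling back to the bare lemma for unseen bundles is our assumption.
MACROS = {
    frozenset({("pred", "law"), ("num", "sg"), ("pers", 3)}): "law",
    frozenset({("pred", "average"), ("num", "pl"), ("pers", 3)}): "averages",
    frozenset({("pred", "believe"), ("num", "pl"), ("tense", "pres")}): "believe",
}

def realise(pred, **features):
    """Map a predicate lemma plus atomic features to a surface form."""
    key = frozenset({("pred", pred), *features.items()})
    return MACROS.get(key, pred)  # unseen bundle: fall back to the lemma

print(realise("average", num="pl", pers=3))  # averages
print(realise("dog", num="sg", pers=3))      # dog (no macro: bare lemma)
```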
{
"text": "The input to our generator is unordered f-structures automatically derived from the development and test set trees of our treebanks, which do not contain any string position information. But, due to the particulars of the automatic f-structure annotation algorithm, the order of sub-f-structures in set-valued GFs, such as ADJ, COORD, happens to correspond to their surface order. To avoid unfairly inflating evaluation results, we lexically reorder the GFs in each sub-f-structure of the development and test input before the generation process. This resembles the \"permute, no dir\" type experiment in (Langkilde, 2002) .",
"cite_spans": [
{
"start": 603,
"end": 620,
"text": "(Langkilde, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "5.1"
},
{
"text": "Following (Langkilde, 2002) and other work on general-purpose generators, BLEU score (Papineni et al., 2002) , average NIST simple string accuracy (SSA) and percentage of exactly matched sentences are adopted as evaluation metrics. As our system guarantees that all input f-structures can generate a complete sentence, special coverage-dependent evaluation (as has been adopted in most grammar-based generation systems) is not necessary in our experiments.",
"cite_spans": [
{
"start": 10,
"end": 27,
"text": "(Langkilde, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 85,
"end": 108,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.2"
},
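Of the metrics above, simple string accuracy is straightforward to reproduce: it is commonly computed as 1 minus the word-level edit distance divided by the reference length. A sketch of that common formulation (NIST's exact variant may differ in detail):

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (wa != wb)))   # substitution
        prev = cur
    return prev[-1]

def ssa(hyp, ref):
    """Simple string accuracy: 1 - edit_distance / reference length."""
    h, r = hyp.split(), ref.split()
    return 1.0 - edit_distance(h, r) / len(r)

print(ssa("the law of averages", "the law of averages"))  # 1.0
print(ssa("law the of averages", "the law of averages"))  # 0.5
```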
{
"text": "Experiments are carried out on an Intel Pentium 4 server with a 3.80GHz CPU and 3GB of memory, using 4-gram models in all experiments. It takes less than 2 minutes to generate all 2,416 sentences (average sentence length 21 words) of WSJ section 23 (average 0.05 sec per sentence), and approximately 4 minutes to generate the 1,708 sentences (average sentence length 30 words) of the CTB test data (average 0.14 sec per sentence). Our evaluation results for the English and Chinese data are shown in Tables 4 and 5, respectively. The different n-gram models perform consistently across the experiments on both English and Chinese data. The results show that the factored n-gram models outperform the basic n-gram models, and the combined n-gram models in turn outperform the single n-gram models. The combined model interpolating n-grams over lexicalised GFs with n-grams conditioned on PRED achieves the best results in the experiments on both English (with feature names) and Chinese (with feature names & values), with BLEU scores of 0.7440 and 0.7123 respectively, and full coverage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.2"
},
{
"text": "Lexicalisation plays an important role in both English and Chinese, boosting the BLEU score without features from 0.5074 to 0.6741 for English, and from 0.5752 to 0.6639 for Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.2"
},
{
"text": "Atomic-valued features play an important role in English, boosting the BLEU score from 0.5074 in the baseline model to 0.6842 when feature names are integrated into the n-gram models. By contrast, feature names in Chinese only increase the BLEU score from 0.5752 to 0.6160. This is likely because English has a richer morphology than Chinese, and because important function words such as 'if', 'to' and 'that' are encoded as atomic-valued features in English f-structures, which helps to determine string order. Combined feature names and values work better on Chinese data, but turn out to hurt n-gram model performance for English data. This may suggest that the feature names in English already carry enough information, while the values of morphological features such as TENSE and NUM provide no new information to help determine word order, but instead aggravate data sparseness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.2"
},
{
"text": "It is very difficult to compare sentence generators since the information contained in the input representation varies greatly between systems. The most direct comparison is between our system and those presented in Cahill and van Genabith (2006) and Hogan et al. (2007) , as they also use treebank-based automatically generated f-structures as the generator inputs. The labelled feature-value structures used in HALogen (Langkilde, 2002) and functional descriptions in FUF/SURGE (Callaway, 2003) also bear some broad similarities to our f-structures. A number of systems using different input but adopting the same evaluation metrics and testing on the same data are listed in Table 6 . Surprisingly (or not), the best results are achieved by a purely symbolic generation system: FUF/SURGE (Callaway, 2003) . However, the approach uses handcrafted grammars which are very time-consuming to produce and to adapt to different languages and domains. Langkilde (2002) reports results for experiments with varying levels of linguistic detail in the input given to the generator. The type \"permute, no dir\" is most comparable to the level of information contained in our f-structure in that the modifiers (adjuncts, coordinates etc.) in the input are not ordered.",
"cite_spans": [
{
"start": 216,
"end": 246,
"text": "Cahill and van Genabith (2006)",
"ref_id": "BIBREF3"
},
{
"start": 251,
"end": 270,
"text": "Hogan et al. (2007)",
"ref_id": "BIBREF10"
},
{
"start": 420,
"end": 437,
"text": "(Langkilde, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 479,
"end": 495,
"text": "(Callaway, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 788,
"end": 804,
"text": "(Callaway, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 941,
"end": 957,
"text": "Langkilde (2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 676,
"end": 683,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Comparison to Previous Work",
"sec_num": "6.1"
},
{
"text": "However, her labelled feature-value structure is more specific than our f-structure, as it also includes syntactic properties such as part-of-speech, which might contribute to the higher BLEU score of HALogen. Moreover, in HALogen nearly 20% of the sentences are only partially generated (or not at all). Nakanishi et al. (2005) carry out experiments on sentences of up to 20 words, with BLEU scores slightly higher than ours. However, their results without the sentence-length limitation (listed in the right column), for 500 sentences randomly selected from WSJ Section 22, are lower than ours, even at lower coverage. Overall our system is competitive, with the best results for coverage (100%), second best for BLEU and SSA scores, and third best on exact match. However, we acknowledge that automatic metrics such as BLEU are not fully reliable for comparing different systems, and results vary widely depending on the coverage of the systems and the specificity of the generation input.",
"cite_spans": [
{
"start": 307,
"end": 330,
"text": "Nakanishi et al. (2005)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Previous Work",
"sec_num": "6.1"
},
{
"text": "Though our dependency-based n-gram models perform well in both the English and Chinese experiments, we are surprised that the experiments on English data produce better results than those on Chinese. It is widely accepted that English generation is more difficult than Chinese, due to morphological inflection and the somewhat less predictable word order of English compared to Chinese. This is reflected in the results of the baseline models: Chinese has a BLEU score of 0.5752 and 8.96% exact match, both higher than those for English. However, with feature augmentation and lexicalisation, the results for English exceed those for Chinese, probably for the following reasons:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "Data size of the English training set is more than twice that of Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "Grammatical functions are more fine-grained in English f-structures than those in Chinese. There are 32 GFs defined for English compared to 20 for Chinese in our input f-structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "Properties of the languages and data sets are different. For example, due to lack of inflection and case markers, many sequences of VPs in Chinese have to be treated as coordinates, whereas their counterparts in English act as different grammatical functions, e.g. (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "(2) \u2104 \u00de \u00b8 invest million build this construction 'invest million yuan to build the construction'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "This results in a total of 7,377 coordinates (4.32 per sentence) in the Chinese development data, compared to 2,699 (1.12 per sentence) in the English data. The most extreme case in the Chinese data features 14 coordinates of country names in a local f-structure. This may account for the low SSA score for the Chinese experiments, as many coordinates are tied in the n-gram scoring method and can not be ordered correctly. Examining the development data shows different types of coordination errors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "\u2022 syntactic coordinates, but not semantic coordinates, as in sentence (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "\u2022 syntactic and semantic coordinates, but usually expressed in a fixed order, e.g. (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "(3) \u00cd \u00d1\u00e8 reform opening-up 'reform and opening up'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "\u2022 syntactic and semantic coordinates, which can freely swap positions, e.g. (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "(4) \u00fb \u00b0 \u00e1 plentiful energy and quick thinking 'energetic and agile'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "At the current stage, our n-gram generation model only keeps the most likely realisation for each local f-structure. We believe that packing all equivalent elements, like coordinates in a local fstructure into equivalent classes, and outputing nbest candidate realisations will greatly increase the SSA score and may also further benefit the efficiency of the algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Differences Between the Languages",
"sec_num": "6.2"
},
{
"text": "We have described a number of increasingly sophisticated n-gram models for sentence generation from labelled bilexical dependencies, in the form of LFG f-structures. The models include additional conditioning on parent GFs and different degrees of lexicalisation. Our method is simple, highly efficient, broad coverage and accurate in practice. We present experiments on English and Chinese, showing that the method generalises well to different languages and data sets. We are currently exploring further combinations of conditioning context and lexicalisation, application to different languages and to dependency representations used to train state-of-the-art dependency parsers (Nivre, 2006) .",
"cite_spans": [
{
"start": 682,
"end": 695,
"text": "(Nivre, 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Further Work",
"sec_num": "7"
},
{
"text": "F-structures can be also interpreted as quasi-logical forms (van Genabith andCrouch, 1996), which more closely resemble inputs used by some other generators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www2.parc.com/isl/groups/nltt/xle/ value structures used in HALogen and the functional descriptions in the FUF/SURGE system.4 N-Gram Models for Dependency-Based Generation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentences labelled as fragment are not included in our development and test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is funded by Science Foundation Ireland grant 04/IN/I527. We thank Aoife Cahill for providing the treebank-based LFG resources for the English data. We gratefully acknowledge the feedback provided by our anonymous reviewers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploiting a Probabilistic Hierarchical Model for Generation",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "42--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bangalore, Srinivas and Rambow, Owen. 2000. Ex- ploiting a Probabilistic Hierarchical Model for Gen- eration. Proceedings of the 18th International Conference on Computational Linguistics, 42-48. Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Probabilistic Generation of Weather Forecast Texts",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "164--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Belz, Anja. 2007. Probabilistic Generation of Weather Forecast Texts. Proceedings of the Conference of the North American Chapter of the Association for Com- putational Linguistics, 164-171. New York.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Long-Distance Dependency Resolution in Automatically Acquired Wide-Coverage PCFG-Based LFG Approximations",
"authors": [
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Burke",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "O'Donovan",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "320--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cahill, Aoife, Burke, Michael, O'Donovan, Ruth, van Genabith, Josef and Way, Andy. 2004. Long- Distance Dependency Resolution in Automatically Acquired Wide-Coverage PCFG-Based LFG Ap- proximations. In Proceedings of the 42nd Annual Meeting of the Association for Computational Lin- guistics, 320-327. Barcelona, Spain.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Robust PCFG-Based Generation Using Automatically Acquired LFG Approximations",
"authors": [
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1033--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cahill, Aoife and van Genabith, Josef. 2006. Ro- bust PCFG-Based Generation Using Automatically Acquired LFG Approximations. Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Asso- ciation for Computational Linguistics, 1033-1040. Sydney, Australia.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Stochastic Realisation Ranking for a Free Word Order Language",
"authors": [
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Forst",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Rohrer",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cahill, Aoife, Forst, Martin and Rohrer, Christian. 2007. Stochastic Realisation Ranking for a Free Word Order Language. Proceedings of the 11th Eu- ropean Workshop on Natural Language Generation, 17-24. Schloss Dagstuhl, Germany.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluating Coverage for Large Symbolic NLG Grammars",
"authors": [
{
"first": "Charles",
"middle": [
"B"
],
"last": "Callaway",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "811--817",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Callaway, Charles B.. 2003. Evaluating Coverage for Large Symbolic NLG Grammars. Proceedings of the Eighteenth International Joint Conference on Artifi- cial Intelligence, 811-817. Acapulco, Mexico.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An efficient chart generator for (semi-)lexicalist grammars",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Poznanski",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 7th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "86--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carroll, John, Copestake, Ann, Flickinger, Dan and Poznanski, Victor. 1999. An efficient chart gen- erator for (semi-)lexicalist grammars. Proceedings of the 7th European Workshop on Natural Language Generation, 86-95. Toulouse, France.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "FUF: The universal unifier user manual version 5.0",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elhadad, Michael. 1991. FUF: The universal unifier user manual version 5.0. Technical Report CUCS- 038-91. Dept. of Computer Science, Columbia Uni- versity.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Treebank-based Acquisition of LFG Resources for Chinese",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of LFG07 Conference",
"volume": "",
"issue": "",
"pages": "214--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guo, Yuqing and van Genabith, Josef and Wang, Haifeng. 2007. Treebank-based Acquisition of LFG Resources for Chinese. Proceedings of LFG07 Con- ference, 214-232. Stanford, CA, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Exploiting Multi-Word Units in History-Based Probabilistic Generation",
"authors": [
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Conor",
"middle": [],
"last": "Cafferkey",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and CoNLL",
"volume": "",
"issue": "",
"pages": "267--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hogan, Deirdre Cafferkey, Conor Cahill, Aoife and van Genabith, Josef. 2007. Exploiting Multi-Word Units in History-Based Probabilistic Generation. Pro- ceedings of the 2007 Joint Conference on Empiri- cal Methods in Natural Language Processing and CoNLL, 267-276. Prague, Czech Republic.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Lexical Functional Grammar: a Formal System for Grammatical Representation. The Mental Representation of Grammatical Relations",
"authors": [
{
"first": "Ronald",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bresnan",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "173--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaplan, Ronald and Bresnan, Joan. 1982. Lexical Functional Grammar: a Formal System for Gram- matical Representation. The Mental Representation of Grammatical Relations, 173-282. MIT Press, Cambridge.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "LFG Generation Produces Context-free Languages",
"authors": [
{
"first": "Ronald",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Jurgen",
"middle": [],
"last": "Wedekind",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "425--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaplan, Ronald and Wedekind, Jurgen. 2000. LFG Generation Produces Context-free Languages. Pro- ceedings of the 18th International Conference on Computational Linguistics, 425-431. Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Forest-Based Statistical Sentence Generation",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of 1st Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "170--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langkilde, Irene. 2000. Forest-Based Statistical Sen- tence Generation. Proceedings of 1st Meeting of the North American Chapter of the Association for Com- putational Linguistics, 170-177. Seattle, WA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An Empirical Verification of Coverage and Correctness for a General-Purpose Sentence Generator",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Second International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langkilde, Irene. 2002. An Empirical Verification of Coverage and Correctness for a General-Purpose Sentence Generator. Proceedings of the Second In- ternational Conference on Natural Language Gener- ation, 17-24. New York, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary Ann",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, Mitchell P., Santorini, Beatrice and Marcinkiewicz, Mary Ann. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Probabilistic Models for Disambiguation of an HPSG-Based Chart Generator",
"authors": [
{
"first": "Hiroko",
"middle": [],
"last": "Nakanishi",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Nakanishi",
"suffix": ""
},
{
"first": "Tsujii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 9th International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "93--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nakanishi, Hiroko and Nakanishi, Yusuke and Tsu- jii, Jun'ichi. 2005. Probabilistic Models for Dis- ambiguation of an HPSG-Based Chart Generator. Proceedings of the 9th International Workshop on Parsing Technology, 93-102. Vancouver, British Columbia.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Inductive Dependency Parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, Joakim. 2006. Inductive Dependency Parsing. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papineni, Kishore, Roukos, Salim, Ward, Todd and Zhu, Wei-Jing. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, 311-318. Philadelphia, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Trainable methods for natural language generation",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NAACL 2000",
"volume": "",
"issue": "",
"pages": "194--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ratnaparkhi, Adwait. 2000. Trainable methods for nat- ural language generation. Proceedings of NAACL 2000, 194-201. Seattle, WA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SRILM-An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of International Conference of Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, Andreas. 2002. SRILM-An Extensible Lan- guage Modeling Toolkit. Proceedings of Interna- tional Conference of Spoken Language Processing. Denver, Colorado.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Direct and underspecified interpretations of LFG fstructures",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "Dick",
"middle": [],
"last": "Crouch",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th conference on Computational linguistics",
"volume": "",
"issue": "",
"pages": "262--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "van Genabith, Josef and Crouch, Dick. 1996. Di- rect and underspecified interpretations of LFG f- structures. Proceedings of the 16th conference on Computational linguistics, 262-267. Copenhagen, Denmark",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Maximum entropy models for realization ranking",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the MTSummit '05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Velldal, Erik and Oepen, Stephan. 2005. Maximum entropy models for realization ranking. Proceedings of the MTSummit '05.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Reining in CCG Chart Realization",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the third International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "White, Michael. 2004. Reining in CCG Chart Realiza- tion. Proceedings of the third International Natural Language Generation Conference. Hampshire, UK.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Towards Broad Coverage Surface Realization with CCG",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Rajakrishnan",
"middle": [],
"last": "Rajkumar",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the MT Summit XI Workshop",
"volume": "",
"issue": "",
"pages": "22--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "White, Michael, Rajkumar, Rajakrishnan and Martin, Scott. 2007. Towards Broad Coverage Surface Re- alization with CCG. Proceedings of the MT Summit XI Workshop, 22-30. Copenhagen, Danmark.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Penn Chinese TreeBank: Phrase Structure Annotation of a Large Corpus",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Fu-Dong",
"middle": [],
"last": "Chiou",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural Language Engineering",
"volume": "11",
"issue": "2",
"pages": "207--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xue, Nianwen, Xia, Fei, Chiou, Fu dong and Palmer, Martha. 2005. The Penn Chinese TreeBank: Phrase Structure Annotation of a Large Corpus. Natural Language Engineering, 11(2): 207-238.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "C-and f-structures for the sentence We believe in the law of averages.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"num": null,
"text": "Examples of n-grams for f 4 in",
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"text": "The development set includes 50 files with 1,116 sentences.Table 2shows some of the characteristics of the English and Chinese data obtained from the development sets.",
"content": "<table><tr><td>sentences. Development Set</td><td colspan=\"2\">English Chinese</td></tr><tr><td>num of sent</td><td>1,700</td><td>1,116</td></tr><tr><td>max length of sent (#words)</td><td>110</td><td>145</td></tr><tr><td>ave length of sent (#words)</td><td>23</td><td>31</td></tr><tr><td>num of local fstr</td><td>23,289</td><td>15,847</td></tr><tr><td>num of local fstr per sent</td><td>13.70</td><td>14.20</td></tr><tr><td>max length of local fstr (#gfs)</td><td>12</td><td>16</td></tr><tr><td>ave length of local fstr (#gfs)</td><td>2.56</td><td>2.90</td></tr><tr><td/><td/><td>The training</td></tr><tr><td/><td/><td>set includes 756 files with a total of 15,663 sen-</td></tr><tr><td/><td/><td>tences. The test set includes 84 files with 1,708</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF6": {
"num": null,
"text": "Results for English Penn-II WSJ section 23",
"content": "<table><tr><td>Test</td><td colspan=\"2\">Without Features</td><td/><td colspan=\"2\">Feature Names</td><td/><td>Feature Names &amp; Values</td></tr><tr><td>Model</td><td colspan=\"2\">ExMatch BLEU</td><td>SSA</td><td colspan=\"2\">ExMatch BLEU</td><td>SSA</td><td>ExMatch BLEU</td><td>SSA</td></tr><tr><td>baseline</td><td>8.96%</td><td colspan=\"2\">0.5752 51.92%</td><td>11.77%</td><td colspan=\"2\">0.6160 54.64%</td><td>12.30%</td><td>0.6239 55.20%</td></tr><tr><td>gf</td><td>9.54%</td><td colspan=\"2\">0.6009 53.02%</td><td>12.53%</td><td colspan=\"2\">0.6391 55.78%</td><td>13.47%</td><td>0.6486 56.60%</td></tr><tr><td>pred</td><td>10.07%</td><td colspan=\"2\">0.6180 53.80%</td><td>13.35%</td><td colspan=\"2\">0.6608 56.72%</td><td>14.46%</td><td>0.6720 57.67%</td></tr><tr><td>lex</td><td>13.93%</td><td colspan=\"2\">0.6639 59.61%</td><td>15.16%</td><td colspan=\"2\">0.6770 60.44%</td><td>15.98%</td><td>0.6804 60.20%</td></tr><tr><td>lex+gf</td><td>14.81%</td><td colspan=\"2\">0.6773 59.92%</td><td>15.52%</td><td colspan=\"2\">0.6911 60.97%</td><td>16.80%</td><td>0.6957 61.07%</td></tr><tr><td>lex+pred</td><td>16.04%</td><td colspan=\"2\">0.6952 60.82%</td><td>16.22%</td><td colspan=\"2\">0.7060 61.45%</td><td>17.51%</td><td>0.7123 61.54%</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF7": {
"num": null,
"text": "",
"content": "<table><tr><td/><td/><td colspan=\"5\">: Results for Chinese CTB6 test data</td></tr><tr><td>WSJ Sec23</td><td colspan=\"3\">Sentence length \u2264 20 words</td><td/><td/><td colspan=\"2\">All sentences</td></tr><tr><td/><td colspan=\"3\">Coverage ExMatch BLEU</td><td>SSA</td><td colspan=\"3\">Coverage ExMatch BLEU</td><td>SSA</td></tr><tr><td>Langkilde(2002)</td><td/><td/><td/><td/><td>82.7%</td><td>28.2%</td><td>0.757</td><td>69.6%</td></tr><tr><td>Callaway(2003)</td><td/><td/><td/><td/><td>98.7%</td><td>49.0%</td><td>88.84%</td></tr><tr><td>Nakanishi(2005)</td><td>90.75%</td><td/><td>0.7733</td><td/><td>83.6%</td><td/><td>0.705</td></tr><tr><td>Cahill(2006)</td><td>98.65%</td><td/><td colspan=\"2\">0.7077 73.73%</td><td>98.05%</td><td/><td>0.6651 68.08%</td></tr><tr><td>Hogan(2007)</td><td>100%</td><td/><td>0.7139</td><td/><td>99.96%</td><td/><td>0.6882 70.92%</td></tr><tr><td>White(2007)</td><td/><td/><td/><td/><td>94.3%</td><td>6.9%</td><td>0.5768</td></tr><tr><td>this paper</td><td>100%</td><td>35.40%</td><td colspan=\"2\">0.7625 81.09%</td><td>100%</td><td>19.83%</td><td>0.7440 75.34%</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF8": {
"num": null,
"text": "Cross system comparison of results for English WSJ section 23",
"content": "<table><tr><td>6 Discussion</td></tr></table>",
"type_str": "table",
"html": null
}
}
}
}