{
"paper_id": "N07-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:47:45.318721Z"
},
"title": "Probabilistic Generation of Weather Forecast Texts",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Technology Group",
"institution": "University of Brighton",
"location": {
"country": "UK"
}
},
"email": "a.s.belz@brighton.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports experiments in which pCRU-a generation framework that combines probabilistic generation methodology with a comprehensive model of the generation space-is used to semi-automatically create several versions of a weather forecast text generator. The generators are evaluated in terms of output quality, development time and computational efficiency against (i) human forecasters, (ii) a traditional handcrafted pipelined NLG system, and (iii) a HALOGEN-style statistical generator. The most striking result is that despite acquiring all decision-making abilities automatically, the best pCRU generators receive higher scores from human judges than forecasts written by experts.",
"pdf_parse": {
"paper_id": "N07-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports experiments in which pCRU-a generation framework that combines probabilistic generation methodology with a comprehensive model of the generation space-is used to semi-automatically create several versions of a weather forecast text generator. The generators are evaluated in terms of output quality, development time and computational efficiency against (i) human forecasters, (ii) a traditional handcrafted pipelined NLG system, and (iii) a HALOGEN-style statistical generator. The most striking result is that despite acquiring all decision-making abilities automatically, the best pCRU generators receive higher scores from human judges than forecasts written by experts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Over the last decade, there has been a lot of interest in statistical techniques among researchers in natural language generation (NLG), a field that was largely unaffected by the statistical revolution in NLP that started in the 1980s. Since Langkilde and Knight's influential work on statistical surface realisation (Knight and Langkilde, 1998) , a number of statistical and corpus-based methods have been reported. However, this interest does not appear to have translated into practice: of the 30 implemented systems and modules with development starting in or after 2000 that are listed on a key NLG website 1 , only five have any statistical component at all (another six involve techniques that are in some way corpus-based). The likely reasons for this lack of take-up are that (i) many existing statistical NLG techniques are inherently expensive, requiring the set of alternatives to be generated in full before the statistical model is applied to select the most likely; and (ii) statistical NLG techniques have not been shown to produce outputs of high enough quality.",
"cite_spans": [
{
"start": 318,
"end": 346,
"text": "(Knight and Langkilde, 1998)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and background",
"sec_num": "1"
},
{
"text": "There has also been a rethinking of the traditional modular NLG architecture (Reiter, 1994) . Some research has moved towards a more comprehensive view, e.g. construing the generation task as a single constraint satisfaction problem. Precursors to current approaches were Hovy's PAULINE which kept track of the satisfaction status of global 'rhetorical goals' (Hovy, 1988) , and Power et al.'s ICON-OCLAST which allowed users to fine-tune different combinations of global constraints (Power, 2000) . In recent comprehensive approaches, the focus is on automatic adaptability, e.g. automatically determining degrees of constraint violability on the basis of corpus frequencies. Examples include Langkilde's (2005) general approach to generation and parsing based on constraint optimisation, and Marciniak and Strube's (2005) integrated, globally optimisable network of classifiers and constraints.",
"cite_spans": [
{
"start": 77,
"end": 91,
"text": "(Reiter, 1994)",
"ref_id": "BIBREF17"
},
{
"start": 360,
"end": 372,
"text": "(Hovy, 1988)",
"ref_id": "BIBREF8"
},
{
"start": 484,
"end": 497,
"text": "(Power, 2000)",
"ref_id": "BIBREF15"
},
{
"start": 694,
"end": 712,
"text": "Langkilde's (2005)",
"ref_id": "BIBREF11"
},
{
"start": 794,
"end": 823,
"text": "Marciniak and Strube's (2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and background",
"sec_num": "1"
},
{
"text": "Both probabilistic and recent comprehensive trends have developed at least in part to address two interrelated issues in NLG: the considerable amount of time and expense involved in building new systems, and the almost complete lack in the field of reusable systems and modules. Both trends have the potential to improve on development time and reusability, but have drawbacks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and background",
"sec_num": "1"
},
{
"text": "Existing statistical NLG (i) uses corpus statistics to inform heuristic decisions in what is otherwise symbolic generation (Varges and Mellish, 2001; White, 2004; Paiva and Evans, 2005) ; (ii) applies n-gram models to select the overall most likely realisation after generation (HALOGEN family); or (iii) reuses an existing parsing grammar or treebank for surface realisation (Velldal et al., 2004; Cahill and van Genabith, 2006) . N -gram models are not linguistically informed, (i) and (iii) come with a substantial manual overhead, and (ii) overgenerates vastly and has a high computational cost (see also Section 3).",
"cite_spans": [
{
"start": 123,
"end": 149,
"text": "(Varges and Mellish, 2001;",
"ref_id": "BIBREF19"
},
{
"start": 150,
"end": 162,
"text": "White, 2004;",
"ref_id": "BIBREF21"
},
{
"start": 163,
"end": 185,
"text": "Paiva and Evans, 2005)",
"ref_id": "BIBREF13"
},
{
"start": 376,
"end": 398,
"text": "(Velldal et al., 2004;",
"ref_id": "BIBREF20"
},
{
"start": 399,
"end": 429,
"text": "Cahill and van Genabith, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and background",
"sec_num": "1"
},
{
"text": "Existing comprehensive approaches tend to incur a manual overhead (finetuning in ICONOCLAST, corpus annotation in Langkilde and Marciniak & Strube) . Handling violability of soft constraints is problematic, and converting corpus-derived probabilities into costs associated with constraints (Langkilde, Marciniak & Strube) turns straightforward statistics into an ad hoc search heuristic. Older approaches are not globally optimisable (PAULINE) or involve exhaustive search (ICONOCLAST).",
"cite_spans": [
{
"start": 114,
"end": 147,
"text": "Langkilde and Marciniak & Strube)",
"ref_id": null
},
{
"start": 290,
"end": 321,
"text": "(Langkilde, Marciniak & Strube)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and background",
"sec_num": "1"
},
{
"text": "The pCRU language generation framework combines a probabilistic generation methodology with a comprehensive model of the generation space, where probabilistic choice informs generation as it goes along, instead of after all alternatives have been generated. pCRU uses existing techniques (Belz, 2005) , but extends these substantially. This paper describes the pCRU framework and reports experiments designed to rigorously test pCRU in practice and to determine whether improvements in development time and reusability can be achieved without sacrificing quality of outputs.",
"cite_spans": [
{
"start": 288,
"end": 300,
"text": "(Belz, 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and background",
"sec_num": "1"
},
{
"text": "pCRU (Belz, 2006 ) is a probabilistic language generation framework that was developed with the aim of providing the formal underpinnings for creating NLG systems that are driven by comprehensive probabilistic models of the entire generation space (including deep generation). NLG systems tend to be composed of generation rules that apply transformations to representations (performing different tasks in different modules). The basic idea in pCRU is that as long as the generation rules are all of the form relation(arg1, ...argn) \u2192 relation1(arg1, ...argp) ... relationm(arg1, ...argq), m \u2265 1, n, p, q \u2265 0, then the set of all generation rules can be seen as defining a context-free language and a single probabilistic model can be estimated from raw or annotated text to guide generation processes.",
"cite_spans": [
{
"start": 5,
"end": 16,
"text": "(Belz, 2006",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "pCRU language generation",
"sec_num": "2"
},
{
"text": "pCRU uses straightforward context-free technology in combination with underspecification techniques, to encode a base generator as a set of expansion rules G composed of n-ary relations with variable and constant arguments (Section 2.1). In non-probabilistic mode, the output is the set of fully expanded (fully specified) forms that can be derived from the input. The pCRU (probabilistic CRU) decision-maker is created by estimating a probability distribution over the base generator from an unannotated corpus of example texts. This distribution is used in one of several ways to drive generation processes, maximising the likelihood either of individual expansions or of entire generation processes (Section 2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "pCRU language generation",
"sec_num": "2"
},
{
"text": "Using context-free representational underspecification, or CRU, (Belz, 2004) , the generation space is encoded as (i) a set G of expansion rules composed of n-ary relations relation(arg 1 , ...arg n ) where the arg i are constants or variables over constants; and (ii) argument and relation type hierarchies. Any sentential form licensed by G can be the input to the generation process which expands it under unifying variable substitution until no further expansion is possible. The output (in non-probabilistic mode) is the set of fully expanded forms (i.e. consisting only of terminals) that can be derived from the input.",
"cite_spans": [
{
"start": 64,
"end": 76,
"text": "(Belz, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Specifying the range of alternatives",
"sec_num": "2.1"
},
{
"text": "The rules in G define the steps in which inputs can be incrementally specified from, say, content to semantic, syntactic and finally surface representations. G therefore defines specificity relations between all sentential forms, i.e. defines which representation is underspecified with respect to which other representations. The generation process is construed explicitly as the task of incrementally specifying one or more word strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Specifying the range of alternatives",
"sec_num": "2.1"
},
{
"text": "Within the limits of context-freeness and atomicity of feature values, CRU is neutral with respect to actual linguistic knowledge representation formalisms used to encode generation spaces. The main motivation for a context-free formalism is the advantage of low computational cost, while the inclusion of arguments on (non)terminals permits keeping track of contextual features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Specifying the range of alternatives",
"sec_num": "2.1"
},
{
"text": "The pCRU decision-making component is created by estimating a probability distribution over the set of expansion rules that encodes the generation space (the base generator), as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection among alternatives",
"sec_num": "2.2"
},
{
"text": "1 Convert corpus into multi-treebank: determine for each sentence all (left-most) derivation trees licensed by the base generator's CRU rules, using maximal partial derivations if there is no complete derivation tree; annotate the (sub)strings in the sentence with the derivation trees, resulting in a set of generation trees for the sentence. 2 Train base generator: Obtain frequency counts for each individual generation rule from the multitreebank, adding 1/n to the count for every rule, where n is the number of alternative derivation trees; convert counts into probability distributions over alternative rules, using add-1 smoothing and standard maximum likelihood estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection among alternatives",
"sec_num": "2.2"
},
{
"text": "The resulting probability distribution is used in one of the following three ways to control generation. Of these, only the first requires the generation forest to be created in full, whereas both greedy modes prune the generation space to a single path: 1 Viterbi generation: do a Viterbi search of the generation forest for a given input, which maximises the joint likelihood of all decisions taken in the generation process. This selects the most likely generation process, but is considerably more expensive than the greedy modes. 2 Greedy generation: make the single most likely decision at each choice point (rule expansion) in a generation process. This is not guaranteed to result in the most likely generation process, but the computational cost is very low. 3 Greedy roulette-wheel generation: use a nonuniform random distribution proportional to the likelihoods of alternatives. E.g. if there are two alternative decisions D 1 and D 2 , with the model giving p(D 1 ) = 0.8 and p(D 2 ) = 0.2, then the proportion of times the generator decides D 1 approaches 80% and D 2 20% in the limit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection among alternatives",
"sec_num": "2.2"
},
{
"text": "The technology described in the two preceding sections has been implemented in the pCRU-1.0 software package. The user defines a generation space by creating a base generator composed of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The pCRU-1.0 generation package",
"sec_num": "2.3"
},
{
"text": "1. the set N of underspecified n-ary relations 2. the set W of fully specified n-ary relations 3. a set R of context-free generation rules",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The pCRU-1.0 generation package",
"sec_num": "2.3"
},
{
"text": "n \u2192 \u03b1, n \u2208 N , \u03b1 \u2208 (W \u222a N ) *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The pCRU-1.0 generation package",
"sec_num": "2.3"
},
{
"text": "This base generator is then trained (as described above) on raw text corpora to provide a probability distribution over generation rules. Optionally, an ngram language model can also be created from the same corpus. The generator is then run in one of the three modes above or one of the following: The random mode serves as a baseline for generation quality: a trained generator must be able to do better, otherwise all the work is done by the base generator (and none by the probabilities). The ngram mode works exactly like HALOGEN-style generation: the generator generates all realisations that the rules allow and then picks one based on the ngram model. This is a point of comparison with existing statistical NLG techniques and also serves as a baseline in terms of computational expense: a generator using pCRU probabilities should be able to produce realisations faster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "a typed feature hierarchy defining argument types and values",
"sec_num": "4."
},
{
"text": "The automatic generation of weather forecasts is one of the success stories of NLP. The restrictiveness of the sublanguage has made the domain of weather forecasting particularly attractive to NLG researchers, and a number of weather forecast generation systems have been created.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building and evaluating pCRU wind forecast text generators",
"sec_num": "3"
},
{
"text": "A recent example of weather forecast text generation is the SUMTIME project (Reiter et al., 2005) which developed a commercially used NLG system that generates marine weather forecasts for offshore oil rigs from numerical forecast data produced by weather simulation programs. The SUMTIME corpus is used in the experiments below.",
"cite_spans": [
{
"start": 76,
"end": 97,
"text": "(Reiter et al., 2005)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building and evaluating pCRU wind forecast text generators",
"sec_num": "3"
},
{
"text": "Each instance in the SUMTIME corpus consists of three numerical data files (the outputs of weather simulators) and the forecast file written by the forecaster on the basis of the data (Figure 1 shows an example). The experiments below focused on a.m. forecasts of wind characteristics. Content determination (deciding which meteorological data to include in a forecast) was carried out off-line.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 193,
"text": "(Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The corpus consists of 2,123 instances (22,985 words) of which half are a.m. forecasts. This may not seem much, but considering the small number of vocabulary items and syntactic structures, the corpus provides extremely good coverage (an initial impression confirmed by the small differences between training and testing data results below).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The base generator 2 was written semi-automatically in two steps. First, a simple chunker was run over the corpus to split wind statements 2 For a fragment of the rule set, see Belz (2006) .",
"cite_spans": [
{
"start": 177,
"end": 188,
"text": "Belz (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The base generator",
"sec_num": "3.2"
},
{
"text": "into wind direction, wind speed, gust speed, gust statements, time expressions, verb phrases, pre-modifiers, and post-modifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The base generator",
"sec_num": "3.2"
},
{
"text": "Preterminal generation rules were automatically created from the resulting chunks. Then, higher-level rules which combine chunks into larger components, taking care of text structuring, aggregation and elision, were manually authored. The top-level generation rules interpret wind statements as sequences of independent units of information, ensuring a linear increase in complexity with increasing input length. Inputs encode meteorological data (as shown in Table 1), and were pre-processed to determine certain types of information, including whether a change in wind direction was clockwise or anti-clockwise, and whether change in wind speed was an increase or a decrease. The final generator takes as inputs number vectors of length 7 to 60, and generates up to 1.6 \u00d7 10 31 alternative realisations for an input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The base generator",
"sec_num": "3.2"
},
{
"text": "The job of the base generator is to describe the textual variety found in the corpus. It makes no decisions about when to prefer one variant over another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The base generator",
"sec_num": "3.2"
},
{
"text": "The corpus was divided at random into 90% training data and 10% testing data. The training set was multi-treebanked with the base generator and the multi-treebank then used to create the probability distribution for the base generator (as described in Section 2.2). A back-off 2-gram model with Good-Turing discounting and no lexical classes was also created from the training set, using the SRILM toolkit, (Stolcke, 2002) . pCRU-1.0 was then run in all five modes to generate forecasts for the inputs in both training and test sets.",
"cite_spans": [
{
"start": 407,
"end": 422,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "This procedure was repeated five times for holdout cross-validation. The small amount of variation across the five repeats, and the small differences between results for training and test sets (Table 2) indicated that five repeats were sufficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "The two automatic metrics used in the evaluations, NIST and BLEU have been shown to correlate highly with expert judgments (Pearson correlation coefficients 0.82 and 0.79 respectively) in this domain (Belz and Reiter, 2006) .",
"cite_spans": [
{
"start": 200,
"end": 223,
"text": "(Belz and Reiter, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "3.4.1"
},
{
"text": "Input [[1, SSW, 16, 20, 0600] , [2, SSE, NOTIME] Table 1 : Forecast texts (for 05-10-2000) generated by each of the pCRU generators, the SUMTIME-Hybrid system and three experts. The corresponding input to the generators is shown in the first row.",
"cite_spans": [
{
"start": 6,
"end": 10,
"text": "[[1,",
"ref_id": null
},
{
"start": 11,
"end": 15,
"text": "SSW,",
"ref_id": null
},
{
"start": 16,
"end": 19,
"text": "16,",
"ref_id": null
},
{
"start": 20,
"end": 23,
"text": "20,",
"ref_id": null
},
{
"start": 24,
"end": 29,
"text": "0600]",
"ref_id": null
},
{
"start": 32,
"end": 35,
"text": "[2,",
"ref_id": null
},
{
"start": 36,
"end": 40,
"text": "SSE,",
"ref_id": null
},
{
"start": 41,
"end": 48,
"text": "NOTIME]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "3.4.1"
},
{
"text": "BLEU (Papineni et al., 2002 ) is a precision metric that assesses the quality of a translation in terms of the proportion of its word n-grams (n \u2264 4 has become standard) that it shares with several reference translations. BLEU also incorporates a 'brevity penalty' to counteract scores increasing as length decreases. BLEU scores range from 0 to 1.",
"cite_spans": [
{
"start": 5,
"end": 27,
"text": "(Papineni et al., 2002",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "3.4.1"
},
{
"text": "The NIST metric (Doddington, 2002) is an adaptation of BLEU, but where BLEU gives equal weight to all n-grams, NIST gives more weight to less frequent (hence more informative) n-grams. There is evidence that NIST correlates better with human judgments than BLEU (Doddington, 2002; Belz and Reiter, 2006) .",
"cite_spans": [
{
"start": 16,
"end": 34,
"text": "(Doddington, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 262,
"end": 280,
"text": "(Doddington, 2002;",
"ref_id": "BIBREF6"
},
{
"start": 281,
"end": 303,
"text": "Belz and Reiter, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "3.4.1"
},
{
"text": "The results below include human scores from two separate experiments. The first was an experiment with 9 subjects experienced in reading marine forecasts (Belz and Reiter, 2006) , the second is a new experiment with 14 similarly experienced subjects 3 . The main differences were that in Experiment 1, subjects rated on a scale from 0 to 5 and were asked for overall quality scores, whereas in Experiment 2, subjects rated on a 1-7 scale and were asked for language quality scores.",
"cite_spans": [
{
"start": 154,
"end": 177,
"text": "(Belz and Reiter, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "3.4.1"
},
{
"text": "In comparing different pCRU modes, NIST and BLEU scores were computed against the test set part of the corpus which contains texts by five different authors. In the two human experiments, NIST and BLEU scores were computed against sets of multiple reference texts (2 for each date in Experiment 1, and 3 in Experiment 2) written by forecasters who had not contributed to the corpus. One-way ANOVAs with post-hoc Tukey HSD tests were used to analyse variance and statistical significance of all results. Table 1 shows forecast texts generated by each of 3 the systems included in the evaluations reported below, together with the corresponding input and three texts created by humans for the same data. Table 2 shows results for the five different pCRU generation modes, for training sets (top) and test sets (bottom), in terms of NIST-5 and BLEU-4 scores averaged over the five runs of the hold-out validation, with average mean deviation figures across the runs shown in brackets.",
"cite_spans": [
{
"start": 553,
"end": 554,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 503,
"end": 510,
"text": "Table 1",
"ref_id": null
},
{
"start": 702,
"end": 709,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "3.4.1"
},
{
"text": "The Tukey Test produced the following results for the differences between means in Table 2 . For the training set, results are the same for NIST and BLEU scores: all differences are significant at P < 0.01, except for the differences in scores for pCRU-2gram and pCRU-viterbi. For the test set and NIST, again all differences are significant at P < 0.01, except for pCRU-2gram vs. pCRU-viterbi. For the test set and BLEU, three differences are non-significant: pCRU-2gram vs. pCRU-viterbi, pCRU-2gram vs. pCRU- roulette, and pCRU-viterbi vs. pCRU-roulette. NIST-5 depends on test set size, and is necessarily lower for the (smaller) test set, but the BLEU-4 scores indicate that performance was slightly worse on test sets. The deviation figures show that variation was also higher on the test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparing different generation modes",
"sec_num": "3.4.2"
},
{
"text": "The clearest result is that pCRU-greedy is ranked highest, and pCRU-random lowest, by considerable margins. pCRU-roulette is ranked second by NIST-5 and fourth by BLEU-4. pCRU-2gram and pCRUviterbi are virtually indistinguishable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing different generation modes",
"sec_num": "3.4.2"
},
{
"text": "Experts in both human experiments agreed with the NIST-5 rankings of the modes exactly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing different generation modes",
"sec_num": "3.4.2"
},
{
"text": "The pCRU modes were also evaluated against the SUMTIME-Hybrid system (running in 'hybrid' mode, taking inputs as in Table 1 ). Table 3 shows averaged evaluation scores by subjects in the two independent experiments described above. There were altogether 6 and 7 systems evaluated in these experiments, respectively, and the differences between the scores shown here were not significant when subjected to the Tukey Test, meaning that both experiments failed to show that experts can tell the difference in the language quality of the texts generated by the handcrafted SUMTIME-Hybrid system and the two best pCRU-greedy systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 123,
"text": "Table 1",
"ref_id": null
},
{
"start": 127,
"end": 134,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Text quality against handcrafted system",
"sec_num": "3.4.3"
},
{
"text": "In the first experiment, the human evaluators gave an average score of 3.59 to pCRU-greedy, 3.22 to the corpus texts, and 3.03 to another (human) forecaster. In Experiment 2, the average human scores were 4.79 for pCRU-greedy, and 4.50 for the corpus texts. Although in each experiment separately, statistical significance could not be shown for the differences between these means, in combination the scores provide evidence that the evaluators thought pCRU-greedy better than the human-written texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text quality against human forecasters",
"sec_num": "3.4.4"
},
{
"text": "The following table shows average number of seconds taken to generate one forecast, averaged over the five cross-validation runs (mean variation figures across the runs in brackets):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing time",
"sec_num": "3.4.5"
},
{
"text": "Test sets pCRU-greedy: 1.65s (= 0.02) 1.58s (< 0.04) pCRU-roulette: 1.61s (< 0.02) 1.58s (< 0.05) pCRU-viterbi:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training sets",
"sec_num": null
},
{
"text": "1.74s (< 0.02) 1.70s (= 0.04) pCRU-2gram:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training sets",
"sec_num": null
},
{
"text": "2.83s (< 0.02) 2.78s (< 0.09) Forecasts for the test sets were generated somewhat faster than for the training sets in all modes. Variation was greater for test sets. Differences between pCRU-greedy and pCRU-roulette are very small, but pCRU-viterbi took 1/10 of a second longer, and pCRU-2gram took more than 1 second longer to generate the average forecast 4 .",
"cite_spans": [
{
"start": 21,
"end": 29,
"text": "(< 0.09)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training sets",
"sec_num": null
},
{
"text": "N -gram models have a built-in bias in favour of shorter strings, because they calculate the likelihood of a string of words as the joint probability of the words, or, more precisely, as the product of the probabilities of each word given the n \u2212 1 preceding words. The likelihood of any string will therefore generally be lower than that of any of its substrings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brevity bias",
"sec_num": "3.4.6"
},
{
"text": "Using a smaller data set for which all systems had outputs, the average number of words in the forecasts generated by the different systems was: pCRU-random:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brevity bias",
"sec_num": "3.4.6"
},
{
"text": "19.43 SUMTIME-Hybrid: 12.39 pCRU-greedy:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brevity bias",
"sec_num": "3.4.6"
},
{
"text": "11.51 Corpus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brevity bias",
"sec_num": "3.4.6"
},
{
"text": "11.28 pCRU-roulette:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brevity bias",
"sec_num": "3.4.6"
},
{
"text": "10.48 pCRU-2gram:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brevity bias",
"sec_num": "3.4.6"
},
{
"text": "7.66 pCRU-viterbi:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brevity bias",
"sec_num": "3.4.6"
},
{
"text": "7.54 pCRU-random has no preference for shorter strings, its average string length is almost twice that of the other pCRU-generators. The 2-gram generator prefers shorter strings, while the Viterbi generator prefers shorter generation processes, and these preferences result in the shortest texts. The poor evaluation results above for the n-gram and Viterbi generators indicate that this brevity bias can be harm-ful in NLG. The remaining generators achieve good matches to the average forecast length in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brevity bias",
"sec_num": "3.4.6"
},
{
"text": "The most time-consuming part of NLG system development is not encoding the range of alternatives, but the decision-making capabilities that enable selection among them. In SUMTIME (Section 3), these were the result of corpus analysis and consultation with writers and readers of marine forecasts. In the pCRU wind forecast generators, the decision-making capabilities are acquired automatically, no expert knowledge or corpus annotation is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development time",
"sec_num": "3.4.7"
},
{
"text": "The SUMTIME team estimate 5 that very approximately 12 person months went directly into developing the SUMTIME microplanner and realiser (the components functionally analogous to the pCRUgenerators), and 24 on generic activities such as expert consultation, which also benefited the microplanner/realiser. The pCRU wind forecasters were built in less than a month, including familiarisation with the corpus, building the chunker and creating the generation rules themselves. However, the SUMTIME system also generates wave forecasts and appropriate layout and canned text. A generous estimate is that it would take another two person months to equip the pCRU forecaster with these capabilities. This is not to say that the two research efforts resulted in exactly the same thing. It is clear that forecast readers prefer the SUMTIME system, but the point is that it did come with a substantial price tag attached. The pCRU approach allows control over the trade-off between cost and quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development time",
"sec_num": "3.4.7"
},
{
"text": "The main contributions of the research described in this paper are: (i) a generation methodology that improves substantially on development time and reusability compared to traditional hand-crafted systems; (ii) techniques for training linguistically informed decision-making components for probabilistic NLG from raw corpora; and (iii) results that show that probabilistic NLG can produce high-quality text. Results also show that (i) a preference for shorter realisations can be harmful in NLG; and that (ii) linguistically literate, probabilistic NLG can outper- An interesting question concerns the contribution of the manually built component (the base generator) to the quality of the outputs. The random mode serves as an absolute baseline in this respect: it indicates how well a particular base generator performs on its own. However, different base generators have different effects on the generation modes. The base generator that was used in previous experiments (Belz, 2005 ) encoded a less structured generation space and the set of concepts it used were less fine-grained (e.g. it did not distinguish between an increase and a decrease in wind speed, considering both simply a change), and therefore it lacked some information necessary for deriving conditional probabilities for lexical choice (e.g. freshening vs. easing). As predicted (Belz, 2005, p. 21) , improvements to the base generator made little difference to the results for pCRU-2gram (up from BLEU 0.45 to 0.5), but greatly improved the performance of the greedy mode (up from 0.43 to 0.64).",
"cite_spans": [
{
"start": 975,
"end": 986,
"text": "(Belz, 2005",
"ref_id": "BIBREF2"
},
{
"start": 1353,
"end": 1372,
"text": "(Belz, 2005, p. 21)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "A basic question for statistical NLG is whether surface string likelihoods are enough to resolve remaining non-determinism in generators, or whether likelihoods at the more abstract level of generation rules are needed. The former always prefers the most frequent variant regardless of context, whereas in the latter probabilities can attach to linguistic objects and be conditioned on contextual features (e.g. one useful feature in the forecast text generators encoded whether a rule was being applied at the beginning of a text). The results reported in this paper provide evidence that probabilistic generation can be more powerful than n-gram based post-selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "The pCRU approach to generation makes it possible to combine the potential accuracy and subtlety of symbolic generation rules with detailed linguistic features on the one hand, and the robustness and handle on nondeterminism provided by probabilities associated with these rules, on the other. The evaluation results for the pCRU generators show that outputs of high quality can be produced with this approach, that it can speed up development and improve reusability of systems, and that in some modes it is more efficient and less brevity-biased than existing HALOGEN-style n-gram techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The current situation in NLG recalls NLU in the late 1980s, when symbolic and statistical NLP were separate research paradigms, a situation memorably caricatured by Gazdar (1996) , before rapidly moving towards a paradigm merger in the early 1990s. A similar development is currently underway in MT where -after several years of statistical MT dominating the field -researchers are now beginning to bring linguistic knowledge into statistical techniques (Charniak et al., 2003; Huang et al., 2006) , and this trend looks set to continue. The lesson from NLU and MT appears to be that higher quality results when the symbolic and statistical paradigms join forces. The research reported in this paper is intended to be a first step in this direction for NLG.",
"cite_spans": [
{
"start": 165,
"end": 178,
"text": "Gazdar (1996)",
"ref_id": "BIBREF7"
},
{
"start": 454,
"end": 477,
"text": "(Charniak et al., 2003;",
"ref_id": "BIBREF5"
},
{
"start": 478,
"end": 497,
"text": "Huang et al., 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Bateman and Zock's list of NLG systems, http://www.fb10.uni-bremen.de/anglistik/ langpro/NLG-table/, 20/01/2006.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Viterbi and the 2-gram generator were implemented identically, except for the n-gram model look-up.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was in part supported under UK EPSRC Grant GR/S24480/01. Many thanks to the anonymous reviewers for very helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Comparing automatic and human evaluation of NLG systems",
"authors": [
{
"first": "A",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. EACL'06",
"volume": "",
"issue": "",
"pages": "313--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Belz and E. Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In Proc. EACL'06, pages 313-320.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Context-free representational underspecification for NLG",
"authors": [
{
"first": "A",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Belz. 2004. Context-free representational underspec- ification for NLG. Technical Report ITRI-04-08, Uni- versity of Brighton.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical generation: Three methods compared and evaluated",
"authors": [
{
"first": "A",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ENLG'05",
"volume": "",
"issue": "",
"pages": "15--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Belz. 2005. Statistical generation: Three methods compared and evaluated. In Proc. of ENLG'05, pages 15-23.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "pCRU: Probabilistic generation using representational underspecification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Belz. 2006. pCRU: Probabilistic generation using representational underspecification. Technical Report NLTG-06-01, University of Brighton.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Robust PCFGbased generation using automatically acquired LFG approximations",
"authors": [
{
"first": "A",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. ACL'06",
"volume": "",
"issue": "",
"pages": "1033--1077",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Cahill and J. van Genabith. 2006. Robust PCFG- based generation using automatically acquired LFG approximations. In Proc. ACL'06, pages 1033-44.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Syntaxbased language models for machine translation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Yamada",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. MT Summit IX",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak, K. Knight, and K. Yamada. 2003. Syntax- based language models for machine translation. In Proc. MT Summit IX.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics",
"authors": [
{
"first": "G",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ARPA Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statis- tics. In Proceedings of the ARPA Workshop on Human Language Technology.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Paradigm merger in NLP",
"authors": [
{
"first": "G",
"middle": [],
"last": "Gazdar",
"suffix": ""
}
],
"year": 1996,
"venue": "Computing Tomorrow: Future Research Directions in Computer Science",
"volume": "",
"issue": "",
"pages": "88--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Gazdar. 1996. Paradigm merger in NLP. In Robin Milner and Ian Wand, editors, Computing Tomor- row: Future Research Directions in Computer Sci- ence, pages 88-109. Cambridge University Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generating Natural Language under Pragmatic Constraints",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Hovy. 1988. Generating Natural Language under Pragmatic Constraints. Lawrence Erlbaum.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical syntax-directed translation with extended domain of locality",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. AMTA",
"volume": "",
"issue": "",
"pages": "66--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Huang, K. Knight, and A. Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. AMTA, pages 66-73.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generation that exploits corpus-based statistical knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Langkilde",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL'98",
"volume": "",
"issue": "",
"pages": "704--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Knight and I. Langkilde. 1998. Generation that ex- ploits corpus-based statistical knowledge. In Proceed- ings of COLING-ACL'98, pages 704-710.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An exploratory application of constraint optimization in Mozart to probabilistic natural language processing",
"authors": [
{
"first": "I",
"middle": [],
"last": "Langkilde",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CSLP'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Langkilde. 2005. An exploratory application of con- straint optimization in Mozart to probabilistic natural language processing. In Proceedings of CSLP'05, vol- ume 3438 of LNAI. Springer-Verlag.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using an annotated corpus as a knowledge source for language generation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Marciniak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of UCNLG'05",
"volume": "",
"issue": "",
"pages": "19--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Marciniak and M. Strube. 2005. Using an annotated corpus as a knowledge source for language generation. In Proceedings of UCNLG'05, pages 19-24.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Empirically-based control of natural language generation",
"authors": [
{
"first": "D",
"middle": [
"S"
],
"last": "Paiva",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Evans",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings ACL'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. S. Paiva and R. Evans. 2005. Empirically-based con- trol of natural language generation. In Proceedings ACL'05.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proc. ACL '02, pages 311-318.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Planning texts by constraint satisfaction",
"authors": [
{
"first": "R",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of COLING'00",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Power. 2000. Planning texts by constraint satisfaction. In Proceedings of COLING'00.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Choosing words in computer-generated weather forecasts",
"authors": [
{
"first": "E",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sripada",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hunter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2005,
"venue": "Artificial Intelligence",
"volume": "167",
"issue": "",
"pages": "137--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Reiter, S. Sripada, J. Hunter, and J. Yu. 2005. Choos- ing words in computer-generated weather forecasts. Artificial Intelligence, 167:137-169.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Has a consensus NL generation architecture appeared and is it psycholinguistically plausible?",
"authors": [
{
"first": "E",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of INLG'94",
"volume": "",
"issue": "",
"pages": "163--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Reiter. 1994. Has a consensus NL generation architec- ture appeared and is it psycholinguistically plausible? In Proceedings of INLG'94, pages 163-170.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SRILM: An extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICSLP'02",
"volume": "",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke. 2002. SRILM: An extensible language mod- eling toolkit. In Proceedings of ICSLP'02, pages 901- 904,.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Instance-based NLG",
"authors": [
{
"first": "S",
"middle": [],
"last": "Varges",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mellish",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of NAACL'01",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Varges and C. Mellish. 2001. Instance-based NLG. In Proc. of NAACL'01, pages 1-8.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Paraphrasing treebanks for stochastic realization ranking",
"authors": [
{
"first": "E",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of TLT'04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Velldal, S. Oepen, and D. Flickinger. 2004. Para- phrasing treebanks for stochastic realization ranking. In Proc. of TLT'04.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Reining in CCG chart realization",
"authors": [
{
"first": "M",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings INLG'04",
"volume": "3123",
"issue": "",
"pages": "182--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. White. 2004. Reining in CCG chart realization. In Proceedings INLG'04, volume 3123 of LNAI, pages 182-191. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "1. Random: ignoring pCRU probabilities, randomly select generation rules. 2. N -gram: ignoring pCRU probabilities, generate set of alternatives and select the most likely according to the n-gram language model."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Meteorological data file and wind forecast for 05-10-2000, a.m. (oil fields anonymised)."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Personal communication with E. Reiter and S. Sripada. form HALOGEN-style shallow statistical methods, in terms of quality and efficiency."
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table><tr><td/><td>,[3,VAR,04,08,-,-,2400]]</td></tr><tr><td>Reference 1</td><td/></tr><tr><td>Reference 2</td><td/></tr><tr><td>pCRU-roulette</td><td>SSW 16-20 GRADUALLY BACKING SSE AND VARIABLE 4-8</td></tr><tr><td>pCRU-viterbi</td><td>SSW 16-20 BACKING SSE VARIABLE 4-8 LATER</td></tr><tr><td>pCRU-2gram</td><td>SSW 16-20 BACKING SSE VARIABLE 4-8 LATER</td></tr><tr><td>pCRU-random</td><td>SSW 16-20 AT FIRST FROM MIDDAY BECOMING SSE DURING THE AFTERNOON THEN VARIABLE 4-8</td></tr></table>",
"type_str": "table",
"text": "Corpus SSW 16-20 GRADUALLY BACKING SSE THEN FALLING VARIABLE 4-8 BY LATE EVENING SSW'LY 16-20 GRADUALLY BACKING SSE'LY THEN DECREASING VARIABLE 4-8 BY LATE EVENING SSW 16-20 GRADUALLY BACKING SSE BY 1800 THEN FALLING VARIABLE 4-8 BY LATE EVENING SUMTIME-Hyb. SSW 16-20 GRADUALLY BACKING SSE THEN BECOMING VARIABLE 10 OR LESS BY MIDNIGHT pCRU-greedy SSW 16-20 BACKING SSE FOR A TIME THEN FALLING VARIABLE 4-8 BY LATE EVENING"
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "NIST-5 and BLEU-4 scores for training and test sets (average variation from the mean)."
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Scores for handcrafted system and two best pCRU-systems from two human experiments."
}
}
}
}