{
"paper_id": "N03-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:04.483229Z"
},
"title": "Statistical Sentence Condensation using Ambiguity Packing and Stochastic Disambiguation Methods for Lexical-Functional Grammar",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tracy",
"middle": [
"H"
],
"last": "King",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Richard",
"middle": [],
"last": "Crouch",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Annie",
"middle": [],
"last": "Zaenen",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an application of ambiguity packing and stochastic disambiguation techniques for Lexical-Functional Grammars (LFG) to the domain of sentence condensation. Our system incorporates a linguistic parser/generator for LFG, a transfer component for parse reduction operating on packed parse forests, and a maximum-entropy model for stochastic output selection. Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems. An experimental evaluation of summarization quality shows a close correlation between the automatic parse-based evaluation and a manual evaluation of generated strings. Overall summarization quality of the proposed system is state-of-the-art, with guaranteed grammaticality of the system output due to the use of a constraint-based parser/generator.",
"pdf_parse": {
"paper_id": "N03-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an application of ambiguity packing and stochastic disambiguation techniques for Lexical-Functional Grammars (LFG) to the domain of sentence condensation. Our system incorporates a linguistic parser/generator for LFG, a transfer component for parse reduction operating on packed parse forests, and a maximum-entropy model for stochastic output selection. Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems. An experimental evaluation of summarization quality shows a close correlation between the automatic parse-based evaluation and a manual evaluation of generated strings. Overall summarization quality of the proposed system is state-of-the-art, with guaranteed grammaticality of the system output due to the use of a constraint-based parser/generator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent work in statistical text summarization has put forward systems that do not merely extract and concatenate sentences, but learn how to generate new sentences from Summary, Text tuples. Depending on the chosen task, such systems either generate single-sentence \"headlines\" for multi-sentence text (Witbrock and Mittal, 1999), or they provide a sentence condensation module designed for combination with sentence extraction systems (Knight and Marcu, 2000; Jing, 2000). The challenge for such systems is to guarantee the grammaticality and summarization quality of the system output, i.e. the generated sentences need to be syntactically well-formed and need to retain the most salient information of the original document. For example, a sentence extraction system might choose a sentence like:",
"cite_spans": [
{
"start": 303,
"end": 330,
"text": "(Witbrock and Mittal, 1999)",
"ref_id": null
},
{
"start": 438,
"end": 462,
"text": "(Knight and Marcu, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 463,
"end": 474,
"text": "Jing, 2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The UNIX operating system, with implementations from Apples to Crays, appears to have the advantage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "from a document, which could be condensed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "UNIX appears to have the advantage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the approach of Witbrock and Mittal (1999), selection and ordering of summary terms is based on bag-of-words models and n-grams. Such models may well produce summaries that are indicative of the original's content; however, n-gram models seem to be insufficient to guarantee grammatical well-formedness of the system output. To overcome this problem, linguistic parsing and generation systems are used in the sentence condensation approaches of Knight and Marcu (2000) and Jing (2000). In these approaches, decisions about which material to include/delete in the sentence summaries do not rely on relative frequency information on words, but rather on probability models of subtree deletions that are learned from a corpus of parses for sentences and their summaries.",
"cite_spans": [
{
"start": 19,
"end": 45,
"text": "Witbrock and Mittal (1999)",
"ref_id": null
},
{
"start": 448,
"end": 471,
"text": "Knight and Marcu (2000)",
"ref_id": "BIBREF7"
},
{
"start": 476,
"end": 487,
"text": "Jing (2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A related area where linguistic parsing systems have been applied successfully is sentence simplification. Grefenstette (1998) presented a sentence reduction method that is based on finite-state technology for linguistic markup and selection, and Carroll et al. (1998) presented a sentence simplification system based on linguistic parsing. However, these approaches do not employ statistical learning techniques to disambiguate simplification decisions, but iteratively apply symbolic reduction rules, producing a single output for each sentence.",
"cite_spans": [
{
"start": 107,
"end": 126,
"text": "Grefenstette (1998)",
"ref_id": "BIBREF4"
},
{
"start": 247,
"end": 268,
"text": "Carroll et al. (1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of our approach is to apply the fine-grained tools for stochastic Lexical-Functional Grammar (LFG) parsing to the task of sentence condensation. The system presented in this paper is conceptualized as a tool that can be used as a standalone system for sentence condensation or simplification, or in combination with sentence extraction for text-summarization beyond the sentence-level. In our system, to produce a condensed version of a sentence, the sentence is first parsed using a broad-coverage LFG grammar for English. The parser produces a set of functional (f-)structures for an ambiguous sentence in a packed format. It presents these to the transfer component in a single packed data structure that represents in one place the substructures shared by several different interpretations. The transfer component operates on these packed representations and modifies the parser output to produce reduced f-structures. The reduced f-structures are then filtered by the generator to determine syntactic well-formedness. A stochastic disambiguator using a maximum entropy model is trained on parsed and manually disambiguated f-structures for pairs of sentences and their condensations. Using the disambiguator, the string generated from the most probable reduced f-structure produced by the transfer system is chosen. In contrast to the approaches mentioned above, our system guarantees the grammaticality of generated strings through the use of a constraint-based generator for LFG which uses a slightly tighter version of the grammar than is used by the parser. As shown in an experimental evaluation, summarization quality of our system is high, due to the combination of linguistically fine-grained analysis tools and expressive stochastic disambiguation models.",
"cite_spans": [
{
"start": 75,
"end": 107,
"text": "Lexical-Functional Grammar (LFG)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A second goal of our approach is to apply the standard evaluation methods for parsing to an automatic evaluation of summarization quality for sentence condensation systems. Instead of deploying costly and non-reusable human evaluation, or using automatic evaluation methods based on word error rate or n-gram match, summarization quality can be evaluated directly and automatically by matching the reduced f-structures that were produced by the system against manually selected f-structures that were produced by parsing a set of manually created condensations. Such an evaluation only requires human labor for the construction and manual structural disambiguation of a reusable gold standard test set. Matching against the test set can be done automatically and rapidly, and is repeatable for development purposes and system comparison. As shown in an experimental evaluation, a close correspondence can be established for rankings produced by the f-structure-based automatic evaluation and a manual evaluation of generated strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, each of the system components will be described in more detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Sentence Condensation in the LFG Framework",
"sec_num": "2"
},
{
"text": "In this project, a broad-coverage LFG grammar and parser for English was employed (see Riezler et al. (2002)). The parser produces a set of context-free constituent (c-)structures and associated functional (f-)structures for each input sentence, represented in packed form (see Maxwell and Kaplan (1989)). For sentence condensation we are only interested in the predicate-argument structures encoded in f-structures. For example, Fig. 1 shows an f-structure manually selected out of the 40 f-structures for the sentence:",
"cite_spans": [
{
"start": 87,
"end": 108,
"text": "Riezler et al. (2002)",
"ref_id": "BIBREF10"
},
{
"start": 280,
"end": 305,
"text": "Maxwell and Kaplan (1989)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 434,
"end": 440,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Parsing and Transfer",
"sec_num": "2.1"
},
{
"text": "A prototype is ready for testing, and Leary hopes to set requirements for a full system by the end of the year.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing and Transfer",
"sec_num": "2.1"
},
{
"text": "The transfer component for the sentence condensation system is based on a component previously used in a machine translation system (see Frank (1999)). It consists of an ordered set of rules that rewrite one f-structure into another. Structures are broken down into flat lists of facts, and rules may add, delete, or change individual facts. Rules may be optional or obligatory. In the case of optional rules, transfer of a single input structure may lead to multiple alternate output structures. The transfer component is designed to operate on packed input from the parser and can also produce packed representations of the condensation alternatives, using methods adapted from parse packing. 1 An example rule that (optionally) removes an adjunct is shown below:",
"cite_spans": [
{
"start": 137,
"end": 149,
"text": "Frank (1999)",
"ref_id": "BIBREF2"
},
{
"start": 697,
"end": 698,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing and Transfer",
"sec_num": "2.1"
},
{
"text": "+adjunct(X,Y), in-set(Z,Y) ?=> delete-node(Z,r1), rule-trace(r1,del(Z,X)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing and Transfer",
"sec_num": "2.1"
},
{
"text": "This rule eliminates an adjunct, Z, by deleting the fact that Z is contained within the set of adjuncts, Y, associated with the expression X. The + before the adjunct(X,Y) fact marks this fact as one that needs to be present for the rule to be applied, but which is left unaltered by the rule application. The in-set(Z,Y) fact is deleted. Two new facts are added. delete-node(Z,r1) indicates that the structure rooted at node Z is to be deleted, and rule-trace(r1,del(Z,X)) adds a trace of this rule to an accumulating history of rule applications. This history records the relation of transferred f-structures to the original f-structure and is available for stochastic disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing and Transfer",
"sec_num": "2.1"
},
{
"text": "Rules used in the sentence condensation transfer system include the optional deletion of all intersective adjuncts (e.g., He slept in the bed. can become He slept., but He did not sleep. cannot become He did sleep. or He 1 The packing feature of the transfer component could not be employed in these experiments since the current interface to the generator and stochastic disambiguation component still requires unpacked representations.",
"cite_spans": [
{
"start": 221,
"end": 222,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing and Transfer",
"sec_num": "2.1"
},
{
"text": "[Figure 1: \"A prototype is ready for testing, and Leary hopes to set requirements for a full system by the end of the year.\"] slept.), the optional deletion of parts of coordinate structures (e.g., They laughed and giggled. can become They giggled.), and certain simplifications (e.g. It is clear that the earth is round. can become The earth is round. but It seems that he is asleep. cannot become He is asleep.). For example, one possible post-transfer output of the sentence in Fig. 1 is shown in Fig. 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 486,
"end": 492,
"text": "Fig. 1",
"ref_id": "FIGREF1"
},
{
"start": 505,
"end": 511,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Parsing and Transfer",
"sec_num": "2.1"
},
{
"text": "The transfer rules are independent of the grammar and are not constrained to preserve the grammaticality or well-formedness of the reduced f-structures. Some of the reduced structures therefore may not correspond to any English sentence, and these are eliminated from further consideration by using the generator as a filter. The filtering is done by running each transferred structure through the generator to see whether it produces an output string. If it does not, the structure is rejected. For example, for the f-structure in Fig. 1, 16 well-formed condensations of the input sentence survive. These 16 well-formed structures correspond to the following strings that were output by the generator (note that a single structure may correspond to more than one string and a given string may correspond to more than one structure): In order to guarantee non-empty output for the overall condensation system, the generation component has to be fault-tolerant in cases where the transfer system operates on a fragmentary parse, or produces non-valid f-structures from valid input f-structures. Robustness techniques currently applied to the generator include insertion and deletion of features in order to match invalid transfer output to the grammar rules and lexicon. Furthermore, repair mechanisms such as repairing subject-verb agreement from the subject's number value are employed. As a last resort, a fall-back mechanism to the original uncondensed f-structure is used. These techniques guarantee that a non-empty set of reduced f-structures yielding grammatical strings in generation is passed on to the next system component. In case of fragmentary input to the transfer component, grammaticality of the output is guaranteed for the separate fragments. In other words, strings generated from a reduced fragmentary f-structure will be as grammatical as the string that was fed into the parsing component.",
"cite_spans": [],
"ref_spans": [
{
"start": 531,
"end": 537,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "After filtering by the generator, the remaining f-structures were weighted by the stochastic disambiguation component. Similar to stochastic disambiguation for constraint-based parsing (Johnson et al., 1999; Riezler et al., 2002), an exponential (a.k.a. log-linear or maximum-entropy) probability model on transferred structures is estimated from a set of training data. The data for estimation consists of pairs of original sentences y and gold-standard summarized f-structures s which were manually selected from the transfer output for each sentence. For training data {(s_j, y_j)}_{j=1}^{m} and a set of possible summarized structures S(y) for each sentence y, the objective was to maximize a discriminative criterion, namely the conditional likelihood L(\u03bb) of a summarized f-structure given the sentence. Optimization of the function shown below was performed using a conjugate gradient optimization routine:",
"cite_spans": [
{
"start": 184,
"end": 206,
"text": "(Johnson et al., 1999;",
"ref_id": "BIBREF6"
},
{
"start": 207,
"end": 228,
"text": "Riezler et al., 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "L(\u03bb) = log \u220f_{j=1}^{m} ( e^{\u03bb\u2022f(s_j)} / \u2211_{s \u2208 S(y_j)} e^{\u03bb\u2022f(s)} ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "At the core of the exponential probability model is a vector of property-functions f to be weighted by parameters \u03bb. For the application of sentence condensation, 13,000 property-functions of roughly three categories were used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "\u2022 Property-functions indicating attributes, attribute-combinations, or attribute-value pairs for f-structure attributes (\u2248 1,000 properties)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "\u2022 Property-functions indicating co-occurrences of verb stems and subcategorization frames (\u2248 12,000 properties)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "\u2022 Property-functions indicating transfer rules used to arrive at the reduced f-structures (\u2248 60 properties).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "A trained probability model is applied to unseen data by selecting the most probable transferred f-structure, yielding the string generated from the selected structure as the target condensation. The transferred f-structure chosen for our current example is shown in Fig. 3. This structure was produced by the following set of transfer rules, where var refers to the indices in the representation of the f-structure: rtrace(r13,keep(var(98),of)), rtrace(r161,keep(system,var(85))), rtrace(r1,del(var(91),set,by)), rtrace(r1,del(var(53),be,for)), rtrace(r20,equal(var(1),and)), rtrace(r20,equal(var(2),and)), rtrace(r2,del(var(1),hope,and)), rtrace(r22,delb(var(0),and)).",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 274,
"text": "Fig. 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "These rules delete the adjunct of the first conjunct (for testing), the adjunct of the second conjunct (by the end of the year), the rest of the second conjunct (Leary hopes to set requirements for a full system), and the conjunction itself (and).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Selection and Generation",
"sec_num": "2.2"
},
{
"text": "Evaluation of the quality of sentence condensation systems, and of text summarization and simplification systems in general, has mostly been conducted as intrinsic evaluation by human experts. Recently, Papineni et al.'s (2001) proposal for an automatic evaluation of translation systems by measuring n-gram matches of the system output against reference examples has become popular for evaluation of summarization systems. In addition, an automatic evaluation method based on context-free deletion decisions has been proposed by Jing (2000). However, for summarization systems that employ a linguistic parser as an integral system component, it is possible to apply the standard evaluation techniques for parsing directly to an evaluation of summarization quality. A parsing-based evaluation allows us to measure the semantic aspects of summarization quality in terms of grammatical-functional information provided by deep parsers. Furthermore, human expertise was necessary only for the creation of condensed versions of sentences, and for the manual disambiguation of parses assigned to those sentences. Given such a gold standard, summarization quality of a system can be evaluated automatically and repeatedly by matching the structures of the system output against the gold-standard structures. The standard metrics of precision, recall, and F-score from statistical parsing can be used as evaluation metrics for measuring matching quality: Precision measures the number of matching structural items in the parses of the system output and the gold standard, out of all structural items in the system output's parse; recall measures the number of matches, out of all items in the gold standard's parse. F-score balances precision and recall as (2 \u00d7 precision \u00d7 recall)/(precision + recall). For the sentence condensation system presented above, the structural items to be matched consist of relation(predicate, argument) triples. For example, the gold-standard f-structure of Fig. 2 corresponds to 23 dependency relations, the first 14 of which are shared with the reduced f-structure chosen by the stochastic disambiguation system: tense(be:0, pres), mood(be:0, indicative), subj(be:0, prototype:2), xcomp(be:0, ready:1), stmt_type(be:0, declarative), vtype(be:0, copular), subj(ready:1, prototype:2), adegree(ready:1, positive), atype(ready:1, predicative), det(prototype:2, a:7), num(prototype:2, sg), pers(prototype:2, 3), det_form(a:7, a), det_type(a:7, indef), adjunct(be:0, for:12), obj(for:12, test:14), adv_type(for:12, vpadv), psem(for:12, unspecified), ptype(for:12, semantic), num(test:14, sg), pers(test:14, 3), pform(test:14, for), vtype(test:14, main).",
"cite_spans": [
{
"start": 199,
"end": 223,
"text": "Papineni et al.'s (2001)",
"ref_id": null
},
{
"start": 526,
"end": 537,
"text": "Jing (2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1977,
"end": 1983,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "A Method for Automatic Evaluation of Sentence Summarization",
"sec_num": "3"
},
{
"text": "Matching these f -structures against each other corresponds to a precision of 1, recall of .61, and F-score of .76.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Method for Automatic Evaluation of Sentence Summarization",
"sec_num": "3"
},
{
"text": "The fact that our method does not rely on a comparison of the characteristics of surface strings is a clear advantage. Such comparisons are bad at handling examples which are similar in meaning but differ in word order or vary structurally, such as in passivization or nominalization. Our method handles such examples straightforwardly. Fig. 4 shows two serialization variants of the condensed sentence of Fig. 2. The f-structures for these examples are similar to the f-structure assigned to the gold standard condensation shown in Fig. 2 (except for the relations ADJUNCT-TYPE:parenthetical versus ADV-TYPE:vpadv versus ADV-TYPE:sadv). An evaluation of summarization quality that is based on matching f-structures will treat these examples equally, whereas an evaluation based on string matching will yield different quality scores for different serializations. In the next section, we present experimental results of an automatic evaluation of the sentence condensation system described above. These results show a close correspondence between automatically produced evaluation results and human judgments on the quality of generated condensed strings.",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 343,
"text": "Fig. 4",
"ref_id": "FIGREF5"
},
{
"start": 406,
"end": 412,
"text": "Fig. 2",
"ref_id": "FIGREF2"
},
{
"start": 536,
"end": 542,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "A Method for Automatic Evaluation of Sentence Summarization",
"sec_num": "3"
},
{
"text": "The sentences and condensations we used are taken from data for the experiments of Knight and Marcu (2000), which were provided to us by Daniel Marcu. These data consist of pairs of sentences and their condensed versions that have been extracted from computer-news articles and abstracts of the Ziff-Davis corpus. Out of these data, we parsed and manually disambiguated 500 sentence pairs. These included a set of 32 sentence pairs that were used for testing purposes in Knight and Marcu (2000). In order to control for the small corpus size of this test set, we randomly extracted an additional 32 sentence pairs from the 500 parsed and disambiguated examples as a second test set. The remaining 436 sentence pairs were used to create training data. For the purpose of discriminative training, a gold standard of transferred f-structures was created from the transfer output and the manually selected f-structures for the condensed strings. This was done automatically by selecting for each example the transferred f-structure that best matched the f-structure annotated for the condensed string.",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "Knight and Marcu (2000)",
"ref_id": "BIBREF7"
},
{
"start": 472,
"end": 495,
"text": "Knight and Marcu (2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "In the automatic evaluation of f-structure match, three different system variants were compared. Firstly, randomly chosen transferred f-structures were matched against the manually selected f-structures for the manually created condensations. This evaluation constitutes a lower bound on the F-score against the given gold standard. Secondly, matching results for transferred f-structures yielding the maximal F-score against the gold standard were recorded, giving an upper bound for the system. Thirdly, the performance of the stochastic model within the range of the lower bound and upper bound was measured by recording the F-score for the f-structure that received the highest probability according to the learned distribution on transferred structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "In order to make our results comparable to the results of Knight and Marcu (2000) and also to investigate the correspondence between the automatic evaluation and human judgments, a manual evaluation of the strings generated by these system variants was conducted. Two human judges were presented with the uncondensed surface string and five condensed strings that were displayed in random order for each test example. The five condensed strings presented to the human judges contained (1) strings generated from three randomly selected f-structures, (2) the strings generated from the f-structures which were selected by the stochastic model, and (3) the manually created gold-standard condensations extracted from the Ziff-Davis abstracts. The judges were asked to judge summarization quality on a scale of increasing quality from 1 to 5 by assessing how well the generated strings retained the most salient information of the original uncondensed sentences. Grammaticality of the system output is optimal and not reported separately. Results for both evaluations are reported for two test corpora of 32 examples each. Testset I contains the sentences and condensations used to evaluate the system described in Knight and Marcu (2000). Testset II consists of another randomly extracted 32 sentence pairs from the same domain, prepared in the same way. Fig. 5 shows evaluation results for a sentence condensation run that uses manually selected f-structures for the original sentences as input to the transfer component. These results demonstrate how the condensation system performs under the optimal circumstances when the parse chosen as input is the best available. Fig. 6 applies the same evaluation data and metrics to a sentence condensation experiment that performs transfer from packed f-structures, i.e. transfer is performed on all parses for an ambiguous sentence instead of on a single manually selected parse. Alternatively, a single input parse could be selected by stochastic models such as the one described in Riezler et al. (2002). A separate phase of parse disambiguation, and perhaps the effects of any errors that this might introduce, can be avoided by transferring from all parses for an ambiguous sentence. This approach is computationally feasible, however, only if condensation can be carried all the way through without unpacking. Our technology is not yet able to do this (in particular, as mentioned earlier, we have not yet implemented a method for stochastic disambiguation on packed f-structures). However, we conducted a preliminary assessment of this possibility by unpacking and enumerating the transferred f-structures. For many sentences this resulted in more candidates than we could operate on in the available time and space, and in those cases we arbitrarily set a cut-off on the number of transferred f-structures we considered. Since transferred f-structures are produced in order of the number of rules applied to transfer them, in this setup the transfer system produces smaller f-structures first, and cuts off less condensed output. The result of this experiment, shown in Fig. 6, thus provides a conservative estimate of the quality of the condensations we might achieve with a full-packing implementation.",
"cite_spans": [
{
"start": 58,
"end": 81,
"text": "Knight and Marcu (2000)",
"ref_id": "BIBREF7"
},
{
"start": 1212,
"end": 1235,
"text": "Knight and Marcu (2000)",
"ref_id": "BIBREF7"
},
{
"start": 2030,
"end": 2051,
"text": "Riezler et al. (2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1354,
"end": 1360,
"text": "Fig. 5",
"ref_id": "FIGREF6"
},
{
"start": 1673,
"end": 1679,
"text": "Fig. 6",
"ref_id": null
},
{
"start": 3128,
"end": 3134,
"text": "Fig. 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "In Figs. 5 and 6, the first row shows F-scores for a random selection, the system selection, and the best possible selection from the transfer output against the gold standard. The second rows show summarization quality scores for generations from a random selection and the system selection, and for the human-written condensation. The third rows report compression ratios. As can be seen from these tables, the rankings of system variants produced by the automatic and manual evaluations confirm a close correlation between the automatic evaluation and human judgments. A comparison of evaluation results across columns, i.e. across selection variants, shows that a stochastic selection of transferred f-structures is indeed important. Even if all f-structures are transferred from the same linguistically rich source, and all generated strings are grammatical, a reduction in error rate of around 50% relative to the upper bound can be achieved by stochastic selection. In contrast, a comparison between transfer runs with and without perfect disambiguation of the original string shows a decrease of about 5% in F-score, and of only .1 points for summarization quality, when transferring from packed parses instead of from the manually selected parse. This shows that it is more important to learn what a good transferred f-structure looks like than to have a perfect f-structure to transfer from. The compression rates associated with the systems that used stochastic selection are around 60%, which is acceptable, but not as aggressive as human-written condensations. Note that in our current implementation, in some cases the transfer component was unable to operate on the packed representation. In those cases a parse was chosen at random as a conservative estimate of transfer from all parses. This fall-back mechanism explains the drop in F-score for the upper bound in comparing Figs. 5 and 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "We presented an approach to sentence condensation that employs linguistically rich LFG grammars in a parsing/generation-based stochastic sentence condensation system. Fine-grained dependency structures are output by the parser, then modified by a highly expressive transfer system, and filtered by a constraint-based generator. 50.9% 60.0% 56.8% Figure 6 : Sentence condensation from packed fstructures for original uncondensed sentences. quality of the system output is state-of-the-art, and grammaticality of condensed strings is guaranteed. Robustness techniques for parsing and generation guarantee that the system produces non-empty output for unseen input. Overall, the summarization quality achieved by our system is similar to the results reported in Knight and Marcu (2000) . This might seem disappointing considering the more complex machinery employed in our approach. It has to be noted that these results are partially due to the somewhat artificial nature of the data that were used in the experiments of Knight and Marcu (2000) and therefore in our experiments: The human-written condensations in the data set extracted from the Ziff-Davis corpus show the same word order as the original sentences and do not exhibit any structural modification that are common in humanwritten summaries. For example, humans tend to make use of structural modifications such as nominalization and verb alternations such as active/passive or transitive/intransitive alternations in condensation. Such alternations can easily be expressed in our transfer-based approach, whereas they impose severe problems to approaches that operate only on phrase structure trees. In the given test set, however, the condensation task restricted to the operation of deletion. 
Creating additional condensations of the original sentences, beyond the condensed versions extracted from the human-written abstracts, would provide a more diverse test set and would furthermore make it possible to match each system output against any number of independent human-written condensations of the same original sentence. This idea of computing matching scores against multiple reference examples was proposed by Alshawi et al. (1998), and later by Papineni et al. (2001) for the evaluation of machine translation systems. Similar to these proposals, an evaluation of condensation quality could consider multiple reference condensations and record the matching score against the most similar example.",
"cite_spans": [
{
"start": 759,
"end": 782,
"text": "Knight and Marcu (2000)",
"ref_id": "BIBREF7"
},
{
"start": 1019,
"end": 1042,
"text": "Knight and Marcu (2000)",
"ref_id": "BIBREF7"
},
{
"start": 2179,
"end": 2200,
"text": "Alshawi et al. (1998)",
"ref_id": "BIBREF0"
},
{
"start": 2216,
"end": 2238,
"text": "Papineni et al. (2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 346,
"end": 354,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Another desideratum for future work is to carry condensation all the way through without unpacking at any stage. Work on employing packing techniques not only for parsing and transfer, but also for generation and stochastic selection is currently underway (see Geman and Johnson (2002) ). This will eventually lead to a system whose components work on packed representations of all or n-best solutions, but completely avoid costly unpacking of representations.",
"cite_spans": [
{
"start": 261,
"end": 285,
"text": "Geman and Johnson (2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic acquisition of hierarchical transduction models for machine translation",
"authors": [
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Shona",
"middle": [
"Douglas"
],
"last": "",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (ACL'98)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 1998. Automatic acquisition of hierarchical trans- duction models for machine translation. In Proceed- ings of the 36th Annual Meeting of the Association for Computational Linguistics (ACL'98), Montreal, Que- bec, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Practical simplification of english newspaper text to assist aphasic readers",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Minnen",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Canning",
"suffix": ""
},
{
"first": "Siobhan",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Tait",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the AAAI Workshop on Integrating Artificial Intelligence and Assistive Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Carroll, Guido Minnen, Yvonne Canning, Siobhan Devlin, and John Tait. 1998. Practical simplification of english newspaper text to assist aphasic readers. In Proceedings of the AAAI Workshop on Integrating Arti- ficial Intelligence and Assistive Technology, Madison, WI.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "From parallel grammar development towards machine translation",
"authors": [
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the MT Summit VII. MT in the Great Translation Era",
"volume": "",
"issue": "",
"pages": "134--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anette Frank. 1999. From parallel grammar develop- ment towards machine translation. In Proceedings of the MT Summit VII. MT in the Great Translation Era, pages 134-142. Kent Ridge Digital Labs, Singapore.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Dynamic programming for parsing and estimation of stochastic unification-based grammars",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL'02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Geman and Mark Johnson. 2002. Dynamic programming for parsing and estimation of stochas- tic unification-based grammars. In Proceedings of the 40th Annual Meeting of the Association for Computa- tional Linguistics (ACL'02), Philadelphia, PA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Producing intelligent telegraphic text reduction to provide an audio scanning service for the blind",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the AAAI Spring Workshop on Intelligent Text Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Grefenstette. 1998. Producing intelligent tele- graphic text reduction to provide an audio scanning service for the blind. In Proceedings of the AAAI Spring Workshop on Intelligent Text Summarization, Stanford, CA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sentence reduction for automatic text summarization",
"authors": [
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 6th Applied Natural Language Processing Conference (ANLP'00)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyan Jing. 2000. Sentence reduction for automatic text summarization. In Proceedings of the 6th Applied Natural Language Processing Conference (ANLP'00), Seattle, WA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Estimators for stochastic \"unification-based\" grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Canon",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Chi",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL'99)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic \"unification-based\" grammars. In Proceedings of the 37th Annual Meeting of the Association for Computa- tional Linguistics (ACL'99), College Park, MD.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistics-based summarization-step one: Sentence compression",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 17th National Conference on Artificial Intelligence (AAAI-2000)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight and Daniel Marcu. 2000. Statistics-based summarization-step one: Sentence compression. In Proceedings of the 17th National Conference on Arti- ficial Intelligence (AAAI-2000), Austin, TX.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An overview of disjunctive constraint satisfaction",
"authors": [
{
"first": "John",
"middle": [],
"last": "Maxwell",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Maxwell and Ronald M. Kaplan. 1989. An overview of disjunctive constraint satisfaction. In Pro- ceedings of the International Workshop on Parsing Technologies, Pittsburgh, PA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. Bleu: a method for automatic evalua- tion of machine translation. Technical Report IBM Re- search Division Technical Report, RC22176 (W0190- 022), Yorktown Heights, N.Y.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Tracy",
"middle": [
"H"
],
"last": "King",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Crouch",
"suffix": ""
},
{
"first": "John",
"middle": [
"T"
],
"last": "Maxwell",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL'02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell, and Mark John- son. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative esti- mation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguis- tics (ACL'02), Philadelphia, PA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "PERF \u2212_, PROG \u2212_, TENSE pres TNS\u2212ASP PASSIVE \u2212, STMT\u2212TYPE decl, VTYPE main 252 COORD +_, COORD\u2212FORM and, COORD\u2212LEVEL ROOT 197",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "F -structure for non-condensed sentence.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Gold standard f -structure reduction.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "PERF \u2212 _ , PROG \u2212 _ , TENSE pres \u00a1 TNS\u2212ASP PASSIVE \u2212, STMT\u2212TYPE decl, VTYPE copular 73",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Transferred f -structure chosen by system.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "F -structure for word-order variants of gold standard condensation.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Sentence condensation from manually selected f -structure for original uncondensed sentences.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": ", the transfer system proposed 32 possible reductions. After filtering these structures by generation, 16 reduced f -structures comprising possible",
"num": null,
"content": "<table><tr><td colspan=\"7\">\"A prototype is ready for testing.\"</td></tr><tr><td>PRED \u00a1</td><td colspan=\"6\">'be &lt;[93:ready]&gt;[30:prototype]'</td></tr><tr><td/><td colspan=\"2\">PRED \u00a1</td><td/><td colspan=\"3\">'prototype \u00a2 '</td></tr><tr><td/><td colspan=\"3\">NTYPE</td><td colspan=\"3\">GRAIN</td><td>count</td></tr><tr><td>SUBJ</td><td colspan=\"2\">SPEC</td><td/><td colspan=\"2\">DET \u00a3</td><td>'a' DET\u2212FORM \u00a3 PRED \u00a1</td><td>a, DET\u2212TYPE \u00a3</td><td>indef</td></tr><tr><td>30</td><td colspan=\"6\">CASE nom, NUM sg, PERS 3</td></tr><tr><td/><td colspan=\"2\">PRED</td><td/><td colspan=\"3\">'ready&lt;[30:prototype]&gt;'</td></tr><tr><td>XCOMP</td><td colspan=\"2\">SUBJ</td><td/><td colspan=\"3\">[30:prototype]</td></tr><tr><td>93</td><td colspan=\"4\">ADEGREE \u00a4</td><td colspan=\"2\">positive \u00a2 , ATYPE \u00a4</td><td>predicative \u00a2</td></tr><tr><td/><td/><td colspan=\"3\">PRED</td><td colspan=\"2\">'for&lt;[141:test]&gt;'</td></tr><tr><td/><td/><td/><td/><td/><td/><td>PRED</td><td>'test \u00a5 '</td></tr><tr><td>ADJUNCT \u00a4</td><td/><td colspan=\"3\">OBJ</td><td/><td>NTYPE \u00a6</td><td>GRAIN</td><td>gerund</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">141</td><td>CASE acc, NUM sg, PERS 3, PFORM for, VTYPE main</td></tr><tr><td/><td>125</td><td colspan=\"5\">ADV\u2212TYPE \u00a4</td><td>vpadv \u00a7 , PSEM \u00a1</td><td>unspecified</td><td>, PTYPE \u00a1</td><td>sem</td></tr><tr><td>TNS\u2212ASP</td><td colspan=\"2\">MOOD \u00a9</td><td colspan=\"4\">indicative, PERF \u00a1</td><td>\u2212_, PROG \u00a1</td><td>\u2212_, TENSE pres \u00a2</td></tr><tr><td colspan=\"7\">PASSIVE \u2212, STMT\u2212TYPE decl, VTYPE copular</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}