{
"paper_id": "H05-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:34:10.989522Z"
},
"title": "Improving Multilingual Summarization: Using Redundancy in the Input to Correct MT errors",
"authors": [
{
"first": "Advaith",
"middle": [],
"last": "Siddharthan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {
"addrLine": "1214 Amsterdam Avenue",
"postCode": "10027",
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {
"addrLine": "1214 Amsterdam Avenue",
"postCode": "10027",
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries. We consider the case of multidocument summarization, where the input documents are in Arabic, and the output summary is in English. Typically, information that makes it to a summary appears in many different lexical-syntactic forms in the input documents. Further, the use of multiple machine translation systems provides yet more redundancy, yielding different ways to realize that information in English. We demonstrate how errors in the machine translations of the input Arabic documents can be corrected by identifying and generating from such redundancy, focusing on noun phrases.",
"pdf_parse": {
"paper_id": "H05-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries. We consider the case of multidocument summarization, where the input documents are in Arabic, and the output summary is in English. Typically, information that makes it to a summary appears in many different lexical-syntactic forms in the input documents. Further, the use of multiple machine translation systems provides yet more redundancy, yielding different ways to realize that information in English. We demonstrate how errors in the machine translations of the input Arabic documents can be corrected by identifying and generating from such redundancy, focusing on noun phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multilingual summarization is a relatively nascent research area which has, to date, been addressed through adaptation of existing extractive English document summarizers. Some systems (e.g. SUM-MARIST (Hovy and Lin, 1999) ) extract sentences from documents in a variety of languages, and translate the resulting summary. Other systems (e.g. Newsblaster (Blair-Goldensohn et al., 2004) ) perform translation before sentence extraction. Readability is a major issue for these extractive systems. The output of machine translation software is usually errorful, especially so for language pairs such as Chinese or Arabic and English. The ungrammaticality and inappropriate word choices resulting from the use of MT systems leads to machine summaries that are difficult to read.",
"cite_spans": [
{
"start": 202,
"end": 222,
"text": "(Hovy and Lin, 1999)",
"ref_id": "BIBREF8"
},
{
"start": 354,
"end": 385,
"text": "(Blair-Goldensohn et al., 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multi-document summarization, however, has information available that was not available during the translation process and which can be used to improve summary quality. A multi-document summarizer is given a set of documents on the same event or topic. This set provides redundancy; for example, each document may refer to the same entity, sometimes in different ways. It is possible that by examining many translations of references to the same entity, a system can gather enough accurate information to improve the translated reference in the summary. Further, as a summary is short and serves as a surrogate for a large set of documents, it is worth investing more resources in its translation; readable summaries can help end users decide which documents they want to spend time deciphering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current extractive approaches to summarization are limited in the extent to which they address quality issues when the input is noisy. Some new systems attempt substituting sentences or clauses in the summary with similar text from extraneous but topic related English documents (Blair-Goldensohn et al., 2004) . This improves readability, but can only be used in limited circumstances, in order to avoid substituting an English sentence that is not faithful to the original. Evans and McKeown (2005) consider the task of summarizing a mixed data set that contains both English and Arabic news reports. Their approach is to separately summarize information that is contained in only English reports, only Arabic reports, and in both. While the only-English and in-both information can be summarized by selecting text from English reports, the summaries of only-Arabic suffer from the same readability issues.",
"cite_spans": [
{
"start": 279,
"end": 310,
"text": "(Blair-Goldensohn et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 476,
"end": 500,
"text": "Evans and McKeown (2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use principles from information theory (Shannon, 1948) to address the issue of readability in multilingual summarization. We take as input, multiple machine translations into English of a cluster of news reports in Arabic. This input is characterized by high levels of linguistic noise and by high levels of information redundancy (multiple documents on the same or related topics and multiple translations into English). Our aim is to use automatically acquired knowledge about the English language in conjunction with the information redundancy to perform error correction on the MT. The main benefit of our approach is to make machine summaries of errorful input easier to read and comprehend for end-users. We focus on noun phrases in this paper. The amount of error correction possible depends on the amount of redundancy in the input and the depth of knowledge about English that we can utilize. We begin by tackling the problem of generating references to people in English summaries of Arabic texts (\u00a2 2). This special case involves large amounts of redundancy and allows for relatively deep English language modeling, resulting in good error correction. We extend our approach to arbitrary NPs in \u00a2 3 . The evaluation emphasis in multi-document summarization has been on evaluating content (not readability), using manual as well as automatic (Lin and Hovy, 2003) methods. We evaluate readability of the generated noun phrases by computing precision, recall and fmeasure of the generated version compared to multiple human models of the same reference, computing these metrics on n-grams. Our results show that our system performs significantly better on precision over two baselines (most frequent initial reference and randomly chosen initial reference). Precision is the most important of these measures as it is important to have a correct reference, even if we don't retain all of the words used in the human models.",
"cite_spans": [
{
"start": 57,
"end": 72,
"text": "(Shannon, 1948)",
"ref_id": "BIBREF13"
},
{
"start": 1370,
"end": 1390,
"text": "(Lin and Hovy, 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We used data from the DUC 2004 Multilingual summarization task. The Document Understanding Conference (http://duc.nist.gov) has been run annually since 2001 and is the biggest summarization evaluation effort, with participants from all over the world. In 2004, for the first time, there was a multi-lingual multi-document summarization task. There were 25 sets to be summarized. For each set consisting of 10 Arabic news reports, the participants were provided with 2 different machine translations into English (using translation software from ISI and IBM). The data provided under DUC includes 4 human summaries for each set for evaluation purposes; the human summarizers were provided a human translation into English of each of the Arabic New reports, and did not have to read the MT output that the machine summarizers took as input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "An analysis of premodification in initial references to people in DUC human summaries for the monolingual task from 2001-2004 showed that 71% of premodifying words were either title or role words (eg. Prime Minister, Physicist or Dr.) or temporal role modifying adjectives such as former or designate. Country, state, location or organization names constituted 22% of premodifying words. All other kinds of premodifying words, such as moderate or loyal constitute only 7%. Thus, assuming the same pattern in human summaries for the multilingual task (cf. section 2.6 on evaluation), our task for each person referred to in a document set is to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": "2.2"
},
{
"text": "1. Collect all references to the person in both translations of each document in the set. 2. Identify the correct roles (including temporal modification) and affiliations for that person, filtering any noise. 3. Generate a reference using the above attributes and the person's name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": "2.2"
},
{
"text": "As the task definition above suggests, our approach is to identify particular semantic attributes for a person, and generate a reference formally from this semantic input. Our analysis of human summaries tells us that the semantic attributes we need to identify are role, organization, country, state, location and temporal modifier. In addition, we also need to identify the person name. We used BBN's IDENTIFINDER (Bikel et al., 1999) to mark up person names, organizations and locations. We marked up countries and (American) states using a list obtained from the CIA factsheet 1 .",
"cite_spans": [
{
"start": 416,
"end": 436,
"text": "(Bikel et al., 1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic semantic tagging",
"sec_num": "2.3"
},
{
"text": "To mark up roles, we used a list derived from Word-Net (Miller et al., 1993) hyponyms of the person synset. Our list has 2371 entries including multiword expressions such as chancellor of the exchequer, brother in law, senior vice president etc. The list is quite comprehensive and includes roles from the fields of sports, politics, religion, military, business and many others. We also used WordNet to obtain a list of 58 temporal adjectives. WordNet classifies these as pre-(eg. occasional, former, incoming etc.) or post-nominal (eg. elect, designate, emeritus etc.). This information is used during generation. Further, we identified elementary noun phrases using the LT TTT noun chunker (Grover et al., 2000) , and combined NP of NP sequences into one complex noun phrase. An example of the output of our semantic tagging module on a portion of machine translated text follows:",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF10"
},
{
"start": 693,
"end": 714,
"text": "(Grover et al., 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic semantic tagging",
"sec_num": "2.3"
},
{
"text": "...\u00a3 NP\u00a4 \u00a3 ROLE\u00a4 representative \u00a3 \u00a6 \u00a5 R OLE\u00a4 of \u00a3 COUNTRY\u00a4 Iraq \u00a3 \u00a6 \u00a5 COUNTRY\u00a4 of the \u00a3 ORG\u00a4 United Nations \u00a3 \u00a6 \u00a5 ORG\u00a4 \u00a7 \u00a3 PERSON\u00a4 Nizar Hamdoon \u00a3 \u00a6 \u00a5 PERSON\u00a4\u00a3 \u00a6 \u00a5 NP\u00a4 that \u00a3 NP\u00a4 thousands of people \u00a3 \u00a6 \u00a5 NP\u00a4 killed or wounded in \u00a3 NP\u00a4 the \u00a3 TIME\u00a4 next \u00a3 \u00a6 \u00a5 TIME\u00a4 few days four of the aerial bombardment of \u00a3 COUNTRY\u00a4 Iraq \u00a3 \u00a6 \u00a5 COUNTRY\u00a4 \u00a9 \u00a3 \u00a6 \u00a5 N P\u00a4 ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic semantic tagging",
"sec_num": "2.3"
},
{
"text": "Our principle data structure for this experiment is the attribute value matrix (AVM). For example, we create the following AVM for the reference to Nizar Hamdoon in the tagged example above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic semantic tagging",
"sec_num": "2.3"
},
{
"text": "! # \" % $ ' & ) ( 1 0 # & 3 2 1 4 6 5 7 5 9 8 @ 7 A C B ( E D F G ( E D H I D P 8 Q R & S Q E \" % T C D U 3 A ) V 7 W @ 9 X Y ( R & 9 a c b d S e g f S h A ) @ C i ) G p S q 3 W 6 p A r # 8 \" s Q E D ' 4 t ! # & ) Q E \" % 5 9 8 G H a c b 6 d S e G u 9 h v x w y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic semantic tagging",
"sec_num": "2.3"
},
{
"text": "Note that we store the relative positions (arg 1 and arg 2) of the country and organization attributes. This information is used both for error reduction and for generation as detailed below. We also replace adjectival country attributes with the country name, using the correspondence in the CIA factsheet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic semantic tagging",
"sec_num": "2.3"
},
{
"text": "We perform coreference by comparing AVMs. Because of the noise present in MT (For example, words might be missing, or proper names might be spelled differently by different MT systems), simple name comparison is not sufficient. We form a coreference link between two AVMs if:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "1. The last name and (if present) the first name match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "2. OR, if the role, country, organization and time attributes are the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "The assumption is that in a document set to be summarized (which consists of related news reports), references to people with the same affiliation and role are likely to be references to the same person, even if the names do not match due to spelling errors. Thus we form one AVM for each person, by combining AVMs. For Nizar Hamdoon, to whom there is only one reference in the set (and thus two MT versions), we obtain the AVM:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "6 P ! \u00a6 \" % $ ' & ) ( 0 # & 3 2 1 4 6 5 7 5 9 8 a u 3 h @ A 9 B ( E D P F G ( E D H I D 8 7 Q R & S Q E \" x T 9 D a u 9 h U 9 A S V 7 W @ C X Y ( R & 9 a u 9 h a c b 6 d S e g f S h A 3 @ 9 i ) G p ) q 7 3 W 6 p A r \u00a6 8 \" s Q E D ' 4 t ! & S Q E \" % 5 9 8 H a u 3 h a c b d ) e G u 3 h v x w y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "where the numbers in brackets represents the counts of this value across all references. The arg values now represent the most frequent ordering of these organizations and countries in the input references. As an example of a combined AVM for a person with a lot of references, consider:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "D P ( E 5 3 & ) a u S 7 h \" x & 3 2 \" % 8 G D g D P ( E 5 9 & 3 a u 3 9 h @ 7 A C B F G ( E D H I \" x 4 G D P 8 Q a u ) C h % D ' & 9 4 6 D P ( a u 9 h U 3 A ) V 7 W @ 9 X % 9 D P ( E \" x & a d f ' e C h a c b d S e g f S h A ) @ C i ) G p S q 3 W 6 p A f \u00a6 D 8 G 5 S T 9 & S Q E \" x 5 3 8 \u00a7 g h & ) ( I Q j i a u 3 h k a c b d ) e g f ' h # l g a d f S h k a c b d S e g f S h W p P m n 5 3 ( E 2 D ( a d f S h v x w w w w y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "This example displays common problems when generating a reference. Zeroual has two affiliations -Leader of the Renovation Party, and Algerian President. There is additional noise -the values AFP and former are most likely errors. As none of the organization or country values occur in the same reference, all are marked arg1; no relative ordering statistics are derivable from the input. For an example demonstrating noise in spelling, consider:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "o & 3 2 2 1 & S ( 1 p & 3 4 4 & S q a d f C h o & 3 2 2 1 & S ( 1 r & 3 4 4 & S q a d f C h p & 9 4 4 G & ) q a c 7 h r & 3 4 4 G & ) q a c C h @ 7 A C B x D & 9 4 G D ( s 5 9 % 5 3 8 D a d f S u 3 h s 5 3 x 5 3 8 D a c C h x D & 9 4 G D ( a t C h 2 \" x 8 G \" % H d Q E D P ( a u 9 h n u H d Q E \" % s D a d f ' h U 3 A ) V 7 W @ 9 X \" % v 7 i & a w 3 h a c b d ) e g f ' h A ) @ C i ) G p S q 3 W 6 p A g x D ' & 3 s D y z 5 9 8 7 Q I ( I i a u 9 h 1 a c b d ) e G u 3 h y z 5 9 8 7 Q I ( I i \u00a7 g x D ' & ) s D a d f ' h a c b 6 d S e g f S h v x w w w w w w w y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "Our approach to removing noise is to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "1. Select the most frequent name with more than one word (this is the most likely full name). 2. Select the most frequent role. 3. Prune the AVM of values that occur with a frequency below an empirically determined threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "Thus we obtain the following AVMs for the three examples above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "! # \" % $ ' & ) ( 1 0 # & 3 2 1 4 6 5 7 5 9 8 @ 7 A C B ( E D F G ( E D H I D P 8 Q R & S Q E \" % T C D U 3 A ) V 7 W @ 9 X Y ( R & 9 a c b d S e g f S h A ) @ C i ) G p S q 3 W 6 p A r # 8 \" s Q E D ' 4 t ! # & ) Q E \" % 5 9 8 G H a c b 6 d S e G u 9 h v x w y { \" x & 3 2 \" % 8 G D g D P ( E 5 9 & 3 @ 7 A C B F G ( E D H I \" x 4 6 D 8 7 Q U 3 A ) V 7 W @ 9 X | % 9 D ( E \" } & a c b d ) e g f ' h { o & ) 2 2 1 & ) ( 1 p & 3 4 4 G & ) q @ 7 A C B % D ' & 9 4 6 D P ( s 5 3 x 5 3 8 D U 3 A ) V 7 W @ 9 X \" % v 7 i & a c b d ) e g f ' h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "This is the input semantics for our generation module described in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying redundancy and filtering noise",
"sec_num": "2.4"
},
{
"text": "In order to generate a reference from the words in an AVM, we need knowledge about syntax. The syntactic frame of a reference to a person is determined by the role. Our approach is to automatically acquire these frames from a corpus of English text. We used the Reuters News corpus for extracting frames. We performed the semantic analysis of the corpus, as in \u00a2 2 .3; syntactic frames were extracted by identifying sequences involving locations, organizations, countries, roles and prepositions. An example of automatically acquired frames with their maximum likelihood probabilities for the role ambassador is: These frames provide us with the required syntactic information to generate from, including word order and choice of preposition. We select the most probable frame that matches the semantic attributes in the AVM. We also use a default set of frames shown below for instances where no automatically acquired frames exist:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating references from AVMs",
"sec_num": "2.5"
},
{
"text": "ROLE=\u00a3 Default\u00a4 COUNTRY ROLE PERSON ORG ROLE PERSON COUNTRY ORG ROLE PERSON ROLE PERSON",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating references from AVMs",
"sec_num": "2.5"
},
{
"text": "If no frame matches, organizations, countries and locations are dropped one by one in decreasing order of argument number, until a matching frame is found. After a frame is selected, any prenominal temporal adjectives in the AVM are inserted to the left of the frame, and any postnominal temporal adjectives are inserted to the immediate right of the role in the frame. Country names that are not objects of a preposition are replaced by their adjectival forms (using the correspondences in the CIA factsheet). For the AVMs above, our generation module produces the following referring expressions: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating references from AVMs",
"sec_num": "2.5"
},
{
"text": "To evaluate the referring expressions generated by our program, we used the manual translation of each document provided by DUC. The drawback of using a summarization corpus is that only one human translation is provided for each document, while multiple model references are required for automatic evaluation. We created multiple model references by using the initial references to a person in the manual translation of each input document in the set in which that person was referenced. We calculated unigram, bigram, trigram and fourgram precision, recall and f-measure for our generated references evaluated against multiple models from the manual translations. To illustrate the scoring, consider evaluating a generated phrase \"a b d\" against three model references \"a b c d\", \"a b c\" and \"b c d\". The bigram precision is 6 x x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.6"
},
{
"text": "(one out of two bigrams in generated phrase occurs in the model set), bigram recall is g x x g g (two out of 7 bigrams in the models occurs in the generated phrase) and f-measure",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.6"
},
{
"text": "( 3 x } \u00a9 ) is x x g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.6"
},
{
"text": ". For fourgrams, P, R and F are zero, as there is a fourgram in the models, but none in the generated NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.6"
},
{
"text": "We used 6 document sets from DUC'04 for development purposes and present the average P, R and F for the remaining 18 sets in Table 1 . There were 210 generated references in the 18 testing sets. The table also shows the popular BLEU (Papineni et al., 2002) and NIST 2 MT metrics. We also provide two baselines -most frequent initial reference to the person in the input (Base1) and a randomly selected initial reference to the person (Base2). As ",
"cite_spans": [
{
"start": 233,
"end": 256,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.6"
},
{
"text": ", on which we do well. This is also reflected in the high scores on BLEU and NIST.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a9 \u00aa \u00a6 '",
"sec_num": null
},
{
"text": "It is instructive to see how these numbers vary as the amount of redundancy increases. Information theory tells us that information should be more recoverable with greater redundancy. Figure 1 plots f-measure against the minimum amount of redundancy. In other words, the value at X=3 gives the f-measure averaged over all people who were mentioned at least thrice in the input. Thus X=1 includes all examples and is the same as Table 1. As the graphs show, the quality of the generated reference improves appreciably when there are at least 5 references to the person in the input. This is a convenient result for summarization because people who are mentioned more frequently in the input are more likely to be mentioned in the summary. KEY -Generated ---Base1 --------Base2 Figure 1 : Improvement in F-measure for n-grams in output with increased redundancy in input.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 1",
"ref_id": null
},
{
"start": 428,
"end": 436,
"text": "Table 1.",
"ref_id": "TABREF1"
},
{
"start": 776,
"end": 784,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a9 \u00aa \u00a6 '",
"sec_num": null
},
{
"text": "Our approach performs noise reduction and generates a reference from information extracted from the machine translations. Information about a person can be obtained in other ways; for example, from a database, or by collecting references to the person from extraneous English-language reports. There are two drawbacks to using extraneous sources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages over using extraneous sources",
"sec_num": "2.7"
},
{
"text": "1. People usually have multiple possible roles and affiliations, so descriptions obtained from an external source might not be appropriate in the current context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages over using extraneous sources",
"sec_num": "2.7"
},
{
"text": "2. Selecting descriptions from external sources can change perspective -one country's terrorist is another country's freedom fighter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages over using extraneous sources",
"sec_num": "2.7"
},
{
"text": "In contrast, our approach generates references that are appropriate and reflect the perspectives expressed in the source.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages over using extraneous sources",
"sec_num": "2.7"
},
{
"text": "In the previous section, we showed how accurate references to people can be generated using an information theoretic approach. While this is an important result in itself for multilingual summarization, the same approach can be extended to correct errors in noun phrases that do not refer to people. This extension is trickier to implement, however, because: 2. Generating: The semantics for an arbitrary noun phrase cannot be defined sufficiently for formal generation; hence our approach is to select the most plausible of the coreferring NPs according to an inferred language model. When sufficient redundancy exists, it is likely that there is at least one option that is superior to most.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arbitrary noun phrases",
"sec_num": "3"
},
{
"text": "Interestingly, the nature of multi-document summarization allows us to perform these two hard tasks. We follow the same theoretical framework (identify redundancy, and then generate from this), but the techniques we use are necessarily different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arbitrary noun phrases",
"sec_num": "3"
},
{
"text": "We used the BLAST algorithm (Altschul et al., 1997) for aligning noun phrases between two translations of the same Arabic sentence. We obtained the best results when each translation was analyzed for noun chunks, and the alignment operation was performed over sequences of words and \u00ab NP\u00ac and \u00ab /NP\u00ac tags. BLAST is an efficient alignment algorithm that assumes that words in the two sentences are roughly in the same order from a global perspective. As neither of the MT systems used performs much clause or phrase reorganization, this assumption is not a problem for our task. An example of two aligned sentences is shown in figure 2. We then extract coreferring noun phrases by selecting the text between aligned \u00ab NP\u00ac and \u00ab /NP\u00ac tags; for example: ",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "(Altschul et al., 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment of NPs across translations",
"sec_num": "3.1"
},
{
"text": "This task integrates well with the clustering approach to multi-document summarization (Barzilay, 2003) , where sentences in the input documents are first clustered according to their similarity, and then one sentence is generated from each cluster. This clustering approach basically does at the level of sentences what we are attempting at the level of noun phrases. After clustering, all sentences within a cluster should represent similar information. Thus, similar noun phrases in sentences within a cluster are likely to refer to the same entities. We do noun phrase coreference by identifying lexically similar noun phrases within a cluster. We use SimFinder (Hatzivassiloglou et al., 1999) for sentence clustering and the f-measure for word overlap to compare noun phrases. We set a threshold for deciding coreference by experimenting on the 6 development sets (cf. \u00a2 2 .6)-the most accurate coreference occurred with a threshold of f=0.6 and a constraint that the two noun phrases must have at least 2 words in common that were neither determiners nor prepositions. For the reference to the UN Special Commission in figure 2, we obtained the following choices from alignments and coreference across translations and documents within a sentence cluster: Larger sentence clusters represent information that is repeated more often across input documents; hence the size of a cluster is indicative of the importance of that information, and the summary is composed by considering each sentence cluster in decreasing order of size and generating one sentence from it. From our perspective of fixing errors in noun phrases, there is likely to be more redundancy in a large cluster; hence this approach is likely to work better within clusters that are important for generating the summary.",
"cite_spans": [
{
"start": 87,
"end": 103,
"text": "(Barzilay, 2003)",
"ref_id": "BIBREF1"
},
{
"start": 666,
"end": 697,
"text": "(Hatzivassiloglou et al., 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment of NPs across documents",
"sec_num": "3.2"
},
{
"text": "As mentioned earlier, formal generation from a set of coreferring noun phrases is impractical due to the unrestricted nature of the underlying semantics. We thus focus on selecting the best of the possible options -the option with the least garbled word order; for example, selecting 1) from the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
{
"text": "1. the malicious campaigns in some Western media 2. the campaigns tendentious in some of the media Western European",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
{
"text": "The basic insight that we utilize is -when two words in a NP occur together in the original documents more often than they should by chance, it is likely they really should occur together in the generated NP. Our approach therefore consists of identifying collocations of length two. Let the number of words in the input documents be . . The natural way to determine how dependent the distributions of \u00ae and\u0101 re is to calculate their mutual information (Church and Hanks, 1991) :",
"cite_spans": [
{
"start": 453,
"end": 477,
"text": "(Church and Hanks, 1991)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
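As an illustration of this estimate, a minimal sketch in Python; the toy corpus, tokenization, and function name are assumptions, not the authors' code:

```python
import math

# Maximum-likelihood pointwise mutual information for a word pair, as
# in Church and Hanks (1991): I(x,y) = log2( P(xy) / (P(x) P(y)) ).
# The toy corpus below is an illustrative assumption.
def mutual_information(x, y, tokens):
    n = len(tokens)
    p_x = tokens.count(x) / n
    p_y = tokens.count(y) / n
    bigrams = list(zip(tokens, tokens[1:]))
    p_xy = bigrams.count((x, y)) / n
    if p_xy == 0:
        return float("-inf")  # pair never observed adjacently
    return math.log2(p_xy / (p_x * p_y))

tokens = ("the malicious campaigns in some western media "
          "the malicious campaigns continued").split()
print(mutual_information("malicious", "campaigns", tokens))
```

Here "malicious" and "campaigns" always occur adjacently, so their mutual information is well above zero, marking the pair as a likely collocation.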
{
"text": "\u00b3 d \u00ae \u00b2 ) 7 \u00a2 \u03bc t \u00b6 g \u2022 d \u00ae \u00a4 \u00b2 ) 7 # d \u00ae h \u00aa \u00b9 # \u012a 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
{
"text": "If the occurrences of \u00ae and\u00afwere completely independent of each other, we would expect the maximum likelihood probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
{
"text": "# d \u00ae \u00b2 ) C of the string\u00b0\u00ae \u014d 9 \u00b1 to be # d \u00ae h \u00a9 \u012a C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
{
"text": ". Thus mutual information is zero when \u00ae and\u00afare independent, and positive otherwise. The greater the value of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
{
"text": "\u00b3 d \u00ae \u00a4 \u00b2 ) 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
{
"text": ", the more likely that\u00b0\u00ae \u00bb 9 \u00b1 is a collocation. Returning to our problem of selecting the best NP from a set of coreferring NPs, we compute a score for each NP (consisting of the string of words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation of noun phrases",
"sec_num": "3.3"
},
{
"text": ") by averaging the mutual information for each bigram:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00bc \u00be \u00bd c c s \u00bc \u00bf",
"sec_num": null
},
{
"text": "\u00c0 \u00c1 C \u00c2 \u00c3 j \u00bc \u00c4 \u00bd c c s \u00bc \u00bf \u00c5 AE \u00c8 \u00c7 \u00ca \u00c9 c \u00cb \u00bf \u00cd \u00cc \u00ce \u00bd \u00c9 c \u00cb \u00bd \u00b3 j \u00bc \u00c9 \u00b2 ' \u00bc \u00c9 t \u00cf \u00bd 9 \u00d0 \u00d2 \u00d1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00bc \u00be \u00bd c c s \u00bc \u00bf",
"sec_num": null
},
{
"text": "We then select the NP with the highest score. This model successfully selects the malicious campaigns in some Western media in the example above and the United nations Special Commission in charge of disarmament of Iraq's weapons of mass destruction in the example in \u00a2 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00bc \u00be \u00bd c c s \u00bc \u00bf",
"sec_num": null
},
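The selection step can be sketched as follows; the bigram mutual-information values and the penalty for unseen bigrams are illustrative assumptions, not values from the paper:

```python
# Score each candidate NP by the average mutual information of its
# adjacent word pairs, then keep the highest-scoring candidate.
# `mi` maps (w_i, w_{i+1}) -> I(w_i, w_{i+1}); the toy values and the
# -10.0 penalty for unseen bigrams are assumptions for illustration.
def np_score(np_words, mi):
    bigrams = list(zip(np_words, np_words[1:]))
    if not bigrams:
        return float("-inf")
    return sum(mi.get(b, -10.0) for b in bigrams) / len(bigrams)

mi = {("the", "malicious"): 1.0, ("malicious", "campaigns"): 4.0,
      ("campaigns", "in"): 2.0, ("in", "some"): 3.0,
      ("some", "western"): 1.5, ("western", "media"): 4.5,
      ("the", "campaigns"): 0.5, ("media", "western"): -2.0}

candidates = [
    "the malicious campaigns in some western media".split(),
    "the campaigns tendentious in some of the media western european".split(),
]
best = max(candidates, key=lambda np: np_score(np, mi))
print(" ".join(best))  # -> the malicious campaigns in some western media
```

The garbled candidate is dominated by word pairs that never cooccur in the input, so its average score collapses and the fluent candidate wins.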
{
"text": "Our approach to evaluation is similar to that for evaluating references to people. For each collection of coreferring NPs, we identified the corresponding model NPs from the manual translations of the input documents by using the BLAST algorithm for word alignment between the MT sentences and the corresponding manually translated sentence. We again provide two baselines -most frequent NP in the set (Base1) and a randomly selected NP from the set (Base2). The numbers in Table 2 are lower than those in Table 1 . This is because generating references to people is a more restricted problem -there is less error in MT output, and a formal generation module is employed for error reduction. In the case of arbitrary NPs, we only select between the available options. However, the information theoretic approach gives significant improvement for the arbitrary NP case as well, particularly for precision, which is an indicator of grammaticality.",
"cite_spans": [],
"ref_spans": [
{
"start": 474,
"end": 481,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 506,
"end": 513,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "3.4"
},
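The n-gram precision/recall comparison against model NPs can be sketched as follows; the example model NP and whitespace tokenization are assumptions for illustration:

```python
# Compare a selected NP against the model NP from the manual
# translation using n-gram precision, recall and f-measure.
def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def ngram_prf(selected, model, n):
    sel = ngrams(selected.lower().split(), n)
    mod = ngrams(model.lower().split(), n)
    if not sel or not mod:
        return 0.0, 0.0, 0.0
    common = sum(min(sel.count(g), mod.count(g)) for g in set(sel))
    p, r = common / len(sel), common / len(mod)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical selected/model pair differing in one word choice.
p, r, f = ngram_prf("the malicious campaigns in some Western media",
                    "the tendentious campaigns in some Western media", 2)
print(round(p, 3), round(r, 3), round(f, 3))  # -> 0.667 0.667 0.667
```

Low n-gram precision typically signals garbled word order, which is why precision is read as an indicator of grammaticality above.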
{
"text": "To evaluate how much impact the rewrites have on summaries, we ran our summarizer on the 18 test sets, and manually evaluated the selected sentences and their rewritten versions for accuracy and fluency. There were 118 sentences, out of which 94 had at least one modification after the rewrite process. We selected 50 of these 94 sentences at random and asked 2 human judges to rate each sentence and its rewritten form on a scale of 1-5 for accuracy and fluency 3 . We used 4 human judges, each judging 25 sentence pairs. The original and rewritten sentences were presented in random order, so judges did not know which sentences were rewritten. Fluency judgments were made before seeing the human translated sentence, and accuracy judgments were made by comparing with the human translation. The average scores before and after rewrite were % and x g respectively for fluency and % g and t \u00d5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "3.5"
},
{
"text": "respectively for accuracy. Thus the rewrite operations increases both scores by around 0.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "3.5"
},
{
"text": "We have demonstrated how the information redundancy in the multilingual multi-document summarization task can be used to reduce MT errors. We do not use any related English news reports for substituting text; hence our approach is not likely to change the perspectives expressed in the original Arabic news to those expressed in English news reports. Further, our approach does not perform any corrections specific to any particular MT system. Thus the techniques described in this paper will remain relevant even with future improvements in MT technology, and will be redundant only when MT is perfect. We have used the Arabic-English data from DUC'04 for this paper, but our approach is equally applicable to other language pairs. Further, our techniques integrate easily with the sentence clustering approach to multi-document summarization -sentence clustering allows us to reliably identify noun phrases that corefer across documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "4"
},
{
"text": "In this paper we have considered the case of noun phrases. In the future, we plan to consider other types of constituents, such as correcting errors in verb groups, and in the argument structure of verbs. This will result in a more generative and less ex- 3 We followed the DARPA/LDC guidelines from http:// ldc.upenn.edu/Projects/TIDES/Translation/TranAssessSpec.pdf. For fluency, the scale was 5:Flawless, 4:Good, 3:Non-native, 2:Disfluent, 1:Incomprehensible.",
"cite_spans": [
{
"start": 256,
"end": 257,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "4"
},
{
"text": "The accuracy scale for information covered (comparing with human translation) was 5:All, 4:Most, 3:Much, 2:Little, 1:None. tractive approach to summarization -indeed the case for generative approaches to summarization is more convincing when the input is noisy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "4"
},
{
"text": "http://www.cia.gov/cia/publications/factbook provides a list of countries and states, abbreviations and adjectival forms, for example United Kingdom/U.K./British/Briton and California/Ca./Californian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.nist.gov/speech/tests/mt/resources/scoring.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": ". Collecting redundancy: Common noun coreference is a hard problem, even within a single clean English text, and harder still across multiple MT texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Gapped BLAST and PSI-BLAST: a new generation of protein database search programs",
"authors": [
{
"first": "S",
"middle": [
"F"
],
"last": "Altschul",
"suffix": ""
},
{
"first": "T",
"middle": [
"L"
],
"last": "Madden",
"suffix": ""
},
{
"first": "A",
"middle": [
"A"
],
"last": "Schaffer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Lipman",
"suffix": ""
}
],
"year": 1997,
"venue": "Nucleic Acids Research",
"volume": "17",
"issue": "25",
"pages": "3389--3402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.F. Altschul, T. L. Madden, A.A. Schaffer, J. Zhang, Z. Zhang, W. Miller, and D. J. Lipman. 1997. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Research, 17(25):3389-3402.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Information Fusion for Multidocument Summarization: Paraphrasing and Generation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Barzilay. 2003. Information Fusion for Multidocu- ment Summarization: Paraphrasing and Generation. Ph.D. thesis, Columbia University, New York.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An algorithm that learns what's in a name",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "211--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bikel, R. Schwartz, and R. Weischedel. 1999. An al- gorithm that learns what's in a name. Machine Learn- ing, 34:211-231.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Columbia University at DUC",
"authors": [
{
"first": "S",
"middle": [],
"last": "Blair-Goldensohn",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hatzivassiologlou",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Passonneau",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Schiffman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Schlajiker",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Siddharthan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Siegelman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of DUC'04",
"volume": "",
"issue": "",
"pages": "23--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Blair-Goldensohn, D. Evans, V. Hatzivassiologlou, K. McKeown, A. Nenkova, R. Passonneau, B. Schiff- man, A. Schlajiker, A. Siddharthan, and S. Siegelman. 2004. Columbia University at DUC 2004. In Proceed- ings of DUC'04, pages 23-30, Boston, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word association norms, mutual information and lexicography",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Church and P. Hanks. 1991. Word association norms, mutual information and lexicography. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Identifying similarities and differences across english and arabic news",
"authors": [
{
"first": "D",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of International Conference on Intelligence Analysis",
"volume": "",
"issue": "",
"pages": "23--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Evans and K. McKeown. 2005. Identifying similar- ities and differences across english and arabic news. In Proceedings of International Conference on Intelli- gence Analysis, pages 23-30, McLean, VA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "LT TTT -A flexible tokenisation tool",
"authors": [
{
"first": "C",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Matheson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mikheev",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of LREC'00",
"volume": "",
"issue": "",
"pages": "1147--1154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grover, C. Matheson, A. Mikheev, and M. Moens. 2000. LT TTT -A flexible tokenisation tool. In Pro- ceedings of LREC'00, pages 1147-1154.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Detecting text similarity over short passages: exploring linguistic feature combinations via machine learning",
"authors": [
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Eskin",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EMNLP'99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Hatzivassiloglou, J. Klavans, and E. Eskin. 1999. De- tecting text similarity over short passages: exploring linguistic feature combinations via machine learning. In Proceedings of EMNLP'99, MD, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automated text summarization in summarist",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Automated Text Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Hovy and Chin-Yew Lin. 1999. Automated text summarization in summarist. In I. Mani and M. May- bury, editors, Advances in Automated Text Summariza- tion, chapter 8. MIT Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic evaluation of summaries using n-gram co-occurrence statistics",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL'03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Lin and E. Hovy. 2003. Automatic evaluation of sum- maries using n-gram co-occurrence statistics. In Pro- ceedings of HLT-NAACL'03, Edmonton.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Five Papers on WordNet",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G.A. Miller, R. Beckwith, C.D. Fellbaum, D. Gross, and K. Miller. 1993. Five Papers on WordNet. Technical report, Princeton University, Princeton, N.J.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluating content selection in summarization: The pyramid method",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004: Main Proceedings",
"volume": "",
"issue": "",
"pages": "145--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova and Rebecca Passonneau. 2004. Evaluat- ing content selection in summarization: The pyramid method. In HLT-NAACL 2004: Main Proceedings, pages 145-152, Boston, MA, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL'02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of ACL'02.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "Bell System Tech. Journal",
"volume": "27",
"issue": "",
"pages": "379--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. E. Shannon. 1948. A mathematical theory of commu- nication. Bell System Tech. Journal, 27:379-423.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "1. the Special Commission in charge of disarmament ofIraq's weapons of mass destruction 2. the Special Commission responsible disarmament Iraqi weapons of mass destruction"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "1. the United nations Special Commission in charge of disarmament of Iraq's weapons of mass destruction 2. the the United Nations Special Commission responsible disarmament Iraqi weapons of mass destruction 3. the Special Commission in charge of disarmament of Iraq's weapons of mass destruction 4. the Special Commission responsible disarmament Iraqi weapons of mass destruction 5. the United nations Special Commission in charge of disarmament of Iraq's weapons of mass destruction 6. the Special Commission of the United Nations responsible disarmament Iraqi weapons of mass destruction"
},
"TABREF1": {
"content": "<table><tr><td>shows,</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": ""
},
"TABREF2": {
"content": "<table><tr><td>: Evaluation of generated reference</td></tr><tr><td>is intuitive as it also uses redundancy to correct er-</td></tr><tr><td>rors, at the level of phrases rather than words. The</td></tr><tr><td>generation module outperforms both baselines, par-</td></tr><tr><td>ticularly on precision -which for unigrams gives an</td></tr><tr><td>indication of the correctness of lexical choice, and</td></tr><tr><td>for higher ngrams gives an indication of grammati-</td></tr><tr><td>cality. The unigram recall of are not losing too much information at the noise fil-x } indicates that we g</td></tr><tr><td>tering stage. Note that we expect a low approach, as we only generate particular attributes \u00a5 \u00a7 \u00a6 for our S</td></tr><tr><td>that are important for a summary. The important</td></tr><tr><td>measure is</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": ""
},
"TABREF4": {
"content": "<table><tr><td colspan=\"7\">&lt;S1&gt; &lt;NP&gt; Ivanov &lt;/NP&gt; stressed | | | | &lt;S2&gt; &lt;NP&gt; Ivanov &lt;/NP&gt; stressed however &lt;NP&gt; it &lt;/NP&gt; should &lt;NP&gt; it &lt;/NP&gt; should be to &lt;NP&gt; Baghdad &lt;/NP&gt; to resume &lt;NP&gt; work &lt;/NP&gt; with | | | | | | | | | | | | to &lt;NP&gt; Baghdad &lt;/NP&gt; reconvening &lt;NP&gt; work &lt;/NP&gt; with</td></tr><tr><td colspan=\"7\">&lt;NP&gt; the Special Commission in charge of | | | | &lt;NP&gt; the Special Commission &lt;/NP&gt; &lt;NP&gt; responsible disarmament Iraqi disarmament of Iraq's weapons of mass destruction &lt;/NP&gt; . &lt;/S1&gt; | | | | | | weapons of mass destruction &lt;/NP&gt; . &lt;/S2&gt;</td></tr><tr><td colspan=\"7\">Figure 2: pair of words \u00ae and\u00af, we use maximum likelihood to estimate the probabilities of observing the strings\u00b0\u00ae</td></tr><tr><td colspan=\"7\">\u00a2 9 \u00b1 ,\u00b0\u00ae and\u00b09\u00b1 . The observed frequency of these \u00b1 strings in the corpus divided by the corpus size</td></tr><tr><td colspan=\"7\">gives the maximum likelihood probabilities of these</td></tr><tr><td>events</td><td># d \u00ae</td><td>\u00b2 ) 7</td><td>,</td><td>d \u00ae x</td><td>and</td><td>\u012a C</td></tr><tr><td/><td/><td/><td/><td/><td/><td>For each</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Two noun chunked MT sentences (S1 and S2) with the words aligned using BLAST."
},
"TABREF5": {
"content": "<table><tr><td>UNIGRAMS Mutual information 0.615*@ 0.658 \u00d3 ' \u00d4 \u00a1 \u00a6 \u00d3 ' \u00d4 Base1 0.584 0.662 Base2 0.583 0.652</td><td>\u00a3 0.607* \u00d3 ' \u00d4 0.592 0.586</td></tr><tr><td>BIGRAMS Mutual information 0.388*@ 0.425* \u00d3 ' \u00d4 \u00a1 \u00a6 \u00d3 ' \u00d4 Base1 0.340 0.402 Base2 0.339 0.387</td><td>\u00a3 0.374*@ \u00d3 ' \u00d4 0.339 0.330</td></tr><tr><td>TRIGRAMS Mutual information 0.221*@ 0.204* \u00d3 ' \u00d4 \u00a1 \u00d3 ' \u00d4 Base1 0.177 0.184 Base2 0.181 0.171</td><td>\u00a3 0.196*@ \u00d3 ' \u00d4 0.166 0.160</td></tr><tr><td colspan=\"2\">FOURGRAMS Mutual information 0.092* \u00d3 ' \u00d4 \u00a1 \u00d3 ' \u00d4 0.090* Base1 0.078 0.080 Base2 0.065 0.066 @ Significantly better than Base1 * Significantly better than Base2 (Significance tested using unpaired t-test at 95% confidence) \u00a3 \u00d3 ' \u00d4 0.085* 0.072 0.061</td></tr><tr><td colspan=\"2\">MT Metrics Mutual information Base1 Base2 BLEU 0.276 0.206 0.184 NIST 5.886 4.979 4.680</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "below gives the average unigram, bigram, trigram and fourgram precision, recall and f-measure for the"
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Evaluation of noun phrase selection selected NPs, evaluated against the models. We excluded references to people as these were treated formally in \u00a2 2 . This left us with 961 noun phrases from the 18 test sets to evaluate.Table 2also provides the BLEU and NIST MT evaluation scores."
}
}
}
}