{
"paper_id": "H01-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:31:33.777892Z"
},
"title": "Automatic Title Generation for Spoken Broadcast News",
"authors": [
{
"first": "Rong",
"middle": [],
"last": "Jin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technology Institute Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213, 412-268-7003",
"region": "PA"
}
},
"email": ""
},
{
"first": "Alexander",
"middle": [
"G"
],
"last": "Hauptmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"addrLine": "412-268-1448",
"postCode": "15213",
"region": "PA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we implemented a set of title generation methods using training set of 21190 news stories and evaluated them on an independent test corpus of 1006 broadcast news documents, comparing the results over manual transcription to the results over automatically recognized speech. We use both F1 and the average number of correct title words in the correct order as metric. Overall, the results show that title generation for speech recognized news documents is possible at a level approaching the accuracy of titles generated for perfect text transcriptions.",
"pdf_parse": {
"paper_id": "H01-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we implemented a set of title generation methods using training set of 21190 news stories and evaluated them on an independent test corpus of 1006 broadcast news documents, comparing the results over manual transcription to the results over automatically recognized speech. We use both F1 and the average number of correct title words in the correct order as metric. Overall, the results show that title generation for speech recognized news documents is possible at a level approaching the accuracy of titles generated for perfect text transcriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "To create a title for a document is a complex task. To generate a title for a spoken document becomes even more challenging because we have to deal with word errors generated by speech recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Historically, the title generation task is strongly connected to traditional summarization because it can be thought of extremely short summarization. Traditional summarization has emphasized the extractive approach, using selected sentences or paragraphs from the document to provide a summary. The weaknesses of this approach are inability of taking advantage of the training corpus and producing summarization with small ratio. Thus, it will not be suitable for title generation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "More recently, some researchers have moved toward \"learning approaches\" that take advantage of training data. Witbrock and Mittal [1] have used Na\u00efve Bayesian approach for learning the document word and title word correlation. However they limited their statistics to the case that the document word and the title word are same surface string. Hauptmann and Jin [2] extended this approach by relaxing the restriction. Treating title generation problem as a variant of Machine translation problem, Kennedy and Hauptmann [3] tried the iterative Expectation-Maximization algorithm. To avoid struggling with organizing selected title words into human readable sentence, Hauptmann [2] used K nearest neighbour method for generating titles. In this paper, we put all those methods together and compare their performance over 1000 speech recognition documents.",
"cite_spans": [
{
"start": 130,
"end": 133,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 362,
"end": 365,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 519,
"end": 522,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 676,
"end": 679,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "We decompose the title generation problem into two parts: learning and analysis from the training corpus and generating a sequence of title words to form the title.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "For learning and analysis of training corpus, we present five different learning methods for comparison: Na\u00efve Bayesian approach with limited vocabulary, Na\u00efve Bayesian approach with full vocabulary, K nearest neighbors, Iterative Expectation-Maximization approach, Term frequency and inverse document frequency method. More details of each approach will be presented in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "For the generating part, we decompose the issues involved as follows: choosing appropriate title words, deciding how many title words are appropriate for this document title, and finding the correct sequence of title words that forms a readable title 'sentence'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "The outline of this paper is as follows: Section 1 gave an introduction to the title generation problem. The details of the experiment and analysis of results are presented in Section 2. Section 3 discusses our conclusions drawn from the experiment and suggests possible improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "In this section we describe the experiment and present the results. Section 2.1 describes the data. Section 2.2 discusses the evaluation method. Section 2.3 gives a detailed description of all the methods, which were compared. Results and analysis are presented in section 2.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE CONTRASTIVE TITLE GENERATION EXPERIMENT",
"sec_num": "2."
},
{
"text": "In our experiment, the training set, consisting of 21190 perfectly transcribed documents, are obtain from CNN web site during 1999. Included with each training document text was a human assigned title. The test set, consisting of 1006 CNN TV news story documents for the same year (1999), are randomly selected from the Informedia Digital Video Library. Each document has a closed captioned transcript, an alternative transcript generated with CMU Sphinx speech recognition system with a 64000-word broadcast news language model and a human assigned title.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "2.1"
},
{
"text": "First, we evaluate title generation by different approaches using the F1 metric. For an automatically generated title Tauto, F1 is measured against corresponding human assigned title Thuman as follows: F1 = 2\u00d7precision\u00d7recall / (precision + recall) Here, precision and recall is measured respectively as the number of identical words in Tauto and Thuman over the number of words in Tauto and the number of words in Thuman. Obviously the sequential word order of the generated title words is ignored by this metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.2"
},
{
"text": "To measure how well a generated title compared to the original human generated title in terms of word order, we also measured the number of correct title words in the hypothesis titles that were in the same order as in the reference titles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.2"
},
{
"text": "We restrict all approaches to generate only 6 title words, which is the average number of title words in the training corpus. Stop words were removed throughout the training and testing documents and also removed from the titles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.2"
},
{
"text": "The five different title generation methods are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the Compared Title Generation Approaches",
"sec_num": "2.3"
},
{
"text": "It tries to capture the correlation between the words in the document and the words in the title. For each document word DW, it counts the occurrence of title word same as DW and apply the statistics to the test documents for generating titles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Na\u00efve Bayesian approach with limited vocabulary (NBL).",
"sec_num": "1."
},
{
"text": "It relaxes the constraint in the previous approach and counts all the document-word-title-word pairs. Then this full statistics will be applied on generating titles for the test documents. 3. Term frequency and inverse document frequency approach (TF.IDF). TF is the frequency of words occurring in the document and IDF is logarithm of the total number of documents divided by the number of documents containing this word. The document words with highest TF.IDF were chosen for the title word candidates. 4. K nearest neighbor approach (KNN). This algorithm is similar to the KNN algorithm applied to topic classification. It searches the training document set for the closest related document and assign the training document title to the new document as title.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Na\u00efve Bayesian approach with full vocabulary (NBF).",
"sec_num": "2."
},
{
"text": "It views documents as written in a 'verbal' language and their titles as written a 'concise' language. It builds the translation model between the 'verbal' language and the 'concise' language from the documents and titles in the training corpus and 'translate' each testing document into title.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Expectation-Maximization approach (EM).",
"sec_num": "5."
},
{
"text": "To generate an ordered set of candidates, equivalent to what we would expect to read from left to right, we built a statistical trigram language model using the SLM tool-kit (Clarkson, 1997) and the 40,000 titles in the training set. This language model was used to determine the most likely order of the title word candidates generated by the NBL, NBF, EM and TF.IDF methods.",
"cite_spans": [
{
"start": 174,
"end": 190,
"text": "(Clarkson, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The sequentializing process for title word candidates",
"sec_num": "2.4"
},
{
"text": "The experiment was conducted both on the closed caption transcripts and automatic speech recognized transcripts. The F1 results and the average number of correct title word in correct order are shown in Figure 1 and ",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 215,
"text": "Figure 1 and",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "RESULTS AND OBSERVATIONS",
"sec_num": "3."
},
{
"text": "original documents spoken documents NBF performs much worse than NBL. NBF performances much worse than NBL in both metrics. The difference between NBF and NBL is that NBL assumes a document word can only generate a title word with the same surface string. Though it appears that NBL loses information with this very strong assumption, the results tell us that some information can safely be ignored. In NBF, nothing distinguishes between important words and trivial words. This lets frequent, but unimportant words dominate the document-word-title-word correlation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F1",
"sec_num": null
},
{
"text": "Light learning approach TF.IDF performances considerably well compared with heavy learning approaches. Surprisingly, heavy learning approaches, NBL, NBF and EM algorithm didn't out performance the light learning approach TF.IDF. We think learning the association between document words and title words by inspecting directly the document and its title is very problematic since many words in the document don't reflect its content. The better strategy should be distilling the document first before learning the correlation between document words and title words. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F1",
"sec_num": null
},
{
"text": "From the analysis discussed in previous section, we draw the following conclusions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "4."
},
{
"text": "1. The KNN approach works well for title generation especially when overlap in content between training dataset and test collection is large.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "4."
},
{
"text": "2. The fact that NBL out performances NBF and TF.IDF out performance NBL and suggests that we need to distinguish important document words from those trivial words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "4."
}
],
"back_matter": [
{
"text": "This material is based in part on work supported by National Science Foundation under Cooperative Agreement No. IRI-9817496. Partial support for this work was provided by the National Science Foundation's National Science, Mathematics, Engineering, and Technology Education Digital Library Program under grant DUE-0085834. This work was also supported in part by the Advanced Research and Development Activity (ARDA) under contract number MDA908-00-C-0037. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or ARDA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACKNOWLEDGMENTS",
"sec_num": "5."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Ultra-Summarization: A Statistical Approach to Generating Highly Condensed Non-Extractive Summaries",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Witbrock",
"suffix": ""
},
{
"first": "Vibhu",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of SIGIR 99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Witbrock and Vibhu Mittal. Ultra-Summarization: A Statistical Approach to Generating Highly Condensed Non-Extractive Summaries. Proceedings of SIGIR 99, Berkeley, CA, August 1999.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Title Generation for Spoken Broadcast News using a Training Corpus",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "A",
"middle": [
"G"
],
"last": "Hauptmann",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of 6th Internal Conference on Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Jin and A.G. Hauptmann. Title Generation for Spoken Broadcast News using a Training Corpus. Proceedings of 6th Internal Conference on Language Processing (ICSLP 2000), Beijing China. 2000.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Title Generation for the Informedia Multimedia Digital Library",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "A",
"middle": [
"G"
],
"last": "Hauptmann",
"suffix": ""
}
],
"year": 2000,
"venue": "ACM Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Kennedy and A.G. Hauptmann. Automatic Title Generation for the Informedia Multimedia Digital Library. ACM Digital Libraries, DL-2000, San Antonio Texas, May 2000.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Comparison of Title Generation Approaches on a test corpus of 1006 documents with either perfect transcript or speech recognized transcripts using the F1 score.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Comparison of Title Generation Approaches on a test corpus of 1006 documents with either perfect transcript or speech recognized transcripts using the average number of correct words in the correct order.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>Comparison of F1</td></tr><tr><td>30.00%</td></tr><tr><td>25.00%</td></tr><tr><td>20.00%</td></tr><tr><td>15.00%</td></tr><tr><td>10.00%</td></tr><tr><td>5.00%</td></tr><tr><td>0.00%</td></tr><tr><td>K N</td></tr></table>",
"text": "2 respectively. KNN works surprisingly well. KNN generates titles for a new document by choosing from the titles in the training corpus. This works fairly well because both the training set and test set come from CNN news of the same year. Compared to other methods, KNN degrades much less with speech-recognized transcripts. Meanwhile, even though KNN performance not as well as TF.IDF and NBL in terms of F1 metric, it performances best in terms of the average number of correct title words in the correct order. If consideration of human readability matters, we would expect KNN to outperform considerately all the other approaches since it is guaranteed to generate human readable title.",
"type_str": "table",
"num": null
}
}
}
}