{
"paper_id": "R13-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:56:34.980271Z"
},
"title": "Hierarchy Identification for Automatically Generating Table-of-Contents",
"authors": [
{
"first": "Nicolai",
"middle": [],
"last": "Erbs",
"suffix": "",
"affiliation": {
"laboratory": "Ubiquitous Knowledge Processing Lab",
"institution": "Technische Universit\u00e4t Darmstadt Iryna Gurevych \u03b1\u03b2 \u03b2 Information Center for Education German Institute for Educational Research and Educational Information Torsten Zesch \u03b3 \u03b3 Language Technology University of Duisburg-Essen",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A table-of-contents (TOC) provides a quick reference to a document's content and structure. We present the first study on identifying the hierarchical structure for automatically generating a TOC using only textual features instead of structural hints e.g. from HTML-tags. We create two new datasets to evaluate our approaches for hierarchy identification. We find that our algorithm performs on a level that is sufficient for a fully automated system. For documents without given segment titles, we extend our work by automatically generating segment titles. We make the datasets and our experimental framework publicly available in order to foster future research in TOC generation.",
"pdf_parse": {
"paper_id": "R13-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "A table-of-contents (TOC) provides a quick reference to a document's content and structure. We present the first study on identifying the hierarchical structure for automatically generating a TOC using only textual features instead of structural hints e.g. from HTML-tags. We create two new datasets to evaluate our approaches for hierarchy identification. We find that our algorithm performs on a level that is sufficient for a fully automated system. For documents without given segment titles, we extend our work by automatically generating segment titles. We make the datasets and our experimental framework publicly available in order to foster future research in TOC generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A table-of-contents (TOC) provides an easy way to gain an overview about a document as a TOC presents the document's content and structure. At the same time, a TOC captures the relative importance of document topics by arranging the topic titles in a hierarchical manner. Thus, TOCs might be used as a short document summary that provides more information about search results in a search engine. Figure 1 provides a sketch of such a search interface. Instead of a thumbnail of the document like most search engines, or a clustering of search results (Carpineto et al., 2009) , we propose to use an automatically extracted TOC.",
"cite_spans": [
{
"start": 551,
"end": 575,
"text": "(Carpineto et al., 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 397,
"end": 405,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of automatically generating a table-ofcontents can be tackled with the subtasks document segmentation, segment title generation, and hierarchy identification. The first step splits the document into topical parts, the second step generates an informative title for each segment, and the third step decides whether a segment is on a higher, equal, or lower level than the previous segment. This paper presents novel approaches for the third subtask: hierarchy identification. Additionally, it presents a detailed analysis of results for segment title generation on the presented datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many documents are already segmented but only few documents already contain an explicit hierarchical TOC (e.g. Wikipedia articles), while for most documents it needs to be automatically identified. For some documents, identification is straight-forward, e.g. if an HTML document already contains hierarchically structured headlines (<h1>, <h2>, etc). We focus on the most challenging case in which only the textual content of the documents' segments are available and the hierarchy needs to be inferred using Natural Language Processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present a framework for automatically identifying the hierarchy of two segments based on semantic and lexical features. We perform linguistic Figure 2 : TOC of this paper preprocessing including named entity recognition (Finkel et al., 2005) , keyphrase extraction (Mihalcea and Tarau, 2004) , and chunking (Schmid, 1994) which are then used as features for machine learning.",
"cite_spans": [
{
"start": 223,
"end": 244,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF11"
},
{
"start": 268,
"end": 294,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF16"
},
{
"start": 310,
"end": 324,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 145,
"end": 153,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To foster future research, we present two new datasets and compare results on these datasets and the one presented by Branavan et al. (2007) .",
"cite_spans": [
{
"start": 118,
"end": 140,
"text": "Branavan et al. (2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contents",
"sec_num": null
},
{
"text": "Our research contribution is to develop new algorithms for segment hierarchy identification, to present new evaluation datasets for all subtasks, and to compare our newly developed methods with the state of the art. We also provide a comprehensive analysis of the benefits and shortcomings of the applied methods. Figure 2 gives an overview of the paper's organization (and at the same time highlights the usefulness of a TOC for the reader). Thus, we may safely skip the enumeration of paper sections and their content that usually concludes the introduction.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 322,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contents",
"sec_num": null
},
{
"text": "For some documents, the hierarchy of segments can be induced using HTML-based features. Pembe and G\u00fcng\u00f6r (2010) focus on DOM tree and formatting features, but also use occurrences of manually crafted cue phrases such as back to top. However, most features are only applicable in very few cases where HTML markup directly provides a hierarchy. In order to provide a uniform user experience, a TOC also needs to be generated for documents where HTML-based methods fail or when only the textual content is available. Feng et al. (2005) train a classifier to detect semantically coherent areas on a page. However, they make use of the existing HTML markup and return areas of the document instead of identify-ing hierarchical structures for segments. Besides markup and position features, they use features based on unigrams and bigrams for classifying a segment into one of 12 categories.",
"cite_spans": [
{
"start": 514,
"end": 532,
"text": "Feng et al. (2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For segment title generation we divide related work into the following classes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text-based approaches make use of only the text in the corresponding segment. Therefore, titles are limited to words appearing in the text. They can be applied in all situations, but will often create trivial or even wrong titles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Supervised approaches learn a model of which document segments usually have a certain title. They are highly precise, but require training data and are limited to an a priori determined set of titles for which the model is trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the following, we organize the few available previous papers on this topic according to these two classes. The text-based approach by Lopez et al. (2011) uses a position heuristic. Each noun phrase in a segment is given a score depending on its position and its tf.idf value. The supervised approach by Branavan et al. (2007) trains an incremental perceptron algorithm (Collins and Roark, 2004; Daum\u00e9 and Marcu, 2005) to predict titles. It uses rules based on the hierarchical structure of the document 1 to rerank the candidates towards the best global solution. Nguyen and Shimazu (2009) expand the supervised approach by Branavan et al. (2007) using word clusters as additional features. Both approaches are trained and tested on the Cormen dataset. The book is split into a set of 39 independent documents at boundaries of segments of the second level. The newly created documents are randomly selected for training (80%) and testing (20%). Such an approach is not suited for our scenario of end-to-end TOC creation, as we want to generate a TOC for a whole document and cannot train on parts of it. Besides, this tunes the system towards special characteristics of the book instead of having a domain-independent system. Keyphrase extraction methods (Frank et al., 1999; Turney, 2000) may also be used for segment title generation if a reader prefers even shorter headlines. These methods can be either text-based or supervised.",
"cite_spans": [
{
"start": 137,
"end": 156,
"text": "Lopez et al. (2011)",
"ref_id": "BIBREF15"
},
{
"start": 306,
"end": 328,
"text": "Branavan et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 372,
"end": 397,
"text": "(Collins and Roark, 2004;",
"ref_id": "BIBREF5"
},
{
"start": 398,
"end": 420,
"text": "Daum\u00e9 and Marcu, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 567,
"end": 592,
"text": "Nguyen and Shimazu (2009)",
"ref_id": "BIBREF18"
},
{
"start": 627,
"end": 649,
"text": "Branavan et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 1258,
"end": 1278,
"text": "(Frank et al., 1999;",
"ref_id": "BIBREF12"
},
{
"start": 1279,
"end": 1292,
"text": "Turney, 2000)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our system tackles the problem using a supervised classifier predicting the relation between the segments. Two segments can be on the same, higher, or lower level. Formally, the difference of a segment with level l 0 and a following segment with level l 1 is any integer n \u2208 [\u2212\u221e..\u221e] for which n= l 1 \u2212 l 0 . However, our analysis on the development data has shown that n typically is in the range of \u2208 [\u22122..2] which means that a following segment is at most 2 levels higher or lower than the previous segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
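The relation label described above can be sketched in a few lines. This is an illustrative reimplementation, not the paper's code (the authors use WEKA); the function name and the clipping parameter are our own choices, with the clip range [−2, 2] taken from the paper's development-data observation.

```python
def hierarchy_relation(level_prev, level_next, clip=2):
    """Pairwise hierarchy relation n = l1 - l0 between adjacent segments.

    Positive n: the following segment is n levels lower (deeper);
    negative n: it is n levels higher. In principle n is any integer,
    but development data suggests clipping to [-clip, clip].
    """
    n = level_next - level_prev
    return max(-clip, min(clip, n))
```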
{
"text": "We identified the following categories of features that solely make use of the text in each segment (we refer to these features as in-document features):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "N-gram features We identify the top-500 ngrams in the collection and use them as Boolean features for each segment. The feature value is set to true if the n-gram appears, false otherwise. These features reflect reoccurring cue phrases and generic terms for fixed segments like the introduction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
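A minimal sketch of such Boolean n-gram features, assuming whitespace tokenization (the paper does not specify its tokenizer or n-gram order; both helper names are hypothetical):

```python
from collections import Counter

def top_ngrams(segments, n=2, k=500):
    """Collect the k most frequent token n-grams across the collection."""
    counts = Counter()
    for seg in segments:
        tokens = seg.lower().split()
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return [gram for gram, _ in counts.most_common(k)]

def ngram_features(segment, vocabulary, n=2):
    """One Boolean feature per vocabulary n-gram: does it occur in the segment?"""
    tokens = segment.lower().split()
    present = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return [gram in present for gram in vocabulary]
```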
{
"text": "Length-based We compute the number of characters (including whitespaces) for both segments and use their difference as feature value. We apply the same procedure for the number of tokens and sentences. A higherlevel segment might be shorter because it provides a summary of the following more detailed segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
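A sketch of the three length-difference features, with simple stand-ins for the paper's (unspecified) tokenizer and sentence splitter: whitespace tokens and counting periods. Function and key names are our own.

```python
def length_features(seg_a, seg_b):
    """Differences in characters (incl. whitespace), tokens, and sentences.

    Counting '.' is only a crude stand-in for a real sentence splitter.
    """
    return {
        "char_diff": len(seg_a) - len(seg_b),
        "token_diff": len(seg_a.split()) - len(seg_b.split()),
        "sentence_diff": seg_a.count(".") - seg_b.count("."),
    }
```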
{
"text": "We identify all named entities in each segment and return a Boolean feature if they share at least one entity. This feature is based on the assumption that two segments having the same entities are related. Two related segments are more likely on the same level or the second segment is a lower-level segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-based",
"sec_num": null
},
{
"text": "Noun chunk features All noun chunks in both segments are identified using the TreeTagger (Schmid, 1994) and then the average number of tokens for each of the segments is computed. The feature value is the difference of the average phrase length. Phrases in lowerlevel segments are longer because they are more detailed. In the example from Figure 1 , the term bubble sort algorithm is longer than the frequently occurring upper level phrase sorting algorithm.",
"cite_spans": [
{
"start": 89,
"end": 103,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 340,
"end": 348,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Entity-based",
"sec_num": null
},
{
"text": "Additionally, the number of chunks that appear in both segments is divided by the number of chunks that appear in the second segment. If a term like sorting algorithm is the only shared term in both segments and the second segment contains in total ten phrases, then the noun chunk overlap is 10%. This feature is based on the assumption that lowerlevel segments mostly mention noun chunks that have been already introduced earlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-based",
"sec_num": null
},
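The two chunk-based features above can be sketched as follows, assuming the noun chunks have already been extracted (the paper uses the TreeTagger for that step). We treat chunks as sets for the overlap ratio, matching the sorting-algorithm example; whether the original counts duplicates is not stated.

```python
def avg_chunk_length_diff(chunks_a, chunks_b):
    """Difference in average tokens per noun chunk between two segments."""
    def avg(chunks):
        return sum(len(c.split()) for c in chunks) / len(chunks) if chunks else 0.0
    return avg(chunks_a) - avg(chunks_b)

def chunk_overlap(chunks_a, chunks_b):
    """Fraction of the second segment's chunks that also occur in the first."""
    unique_b = set(chunks_b)
    if not unique_b:
        return 0.0
    return len(set(chunks_a) & unique_b) / len(unique_b)
```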
{
"text": "Keyphrase-based We apply the state-of-the-art keyphrase extraction approach TextRank (Mihalcea and Tarau, 2004) and identify a ranked list of keyphrases in each segment.",
"cite_spans": [
{
"start": 85,
"end": 111,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-based",
"sec_num": null
},
{
"text": "We compare the top-k (k \u2208 [1, 2, 3, 4, 5, 10, 20]) keyphrases of each segment pair and return true if at least one keyphrase appears in both segments. These features also reflect topically related segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-based",
"sec_num": null
},
{
"text": "Frequency We apply another feature set which uses a background corpus in addition to the text of the segments. We use the Google Web1T corpus (Brants and Franz, 2006) to retrieve the frequency of a term. The average frequency of the top-k (k \u2208 [5, 10]) keyphrases in a segment is calculated and the difference between two segments is the feature value. We expect lower-level segments to contain keyphrases that are less frequently used.",
"cite_spans": [
{
"start": 142,
"end": 166,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-based",
"sec_num": null
},
{
"text": "We use WEKA (Hall et al., 2009) to train the classifier and report results obtained with SVM, which performed best on the development set. 2 We evaluate all approaches by computing the accuracy as the fraction of correctly identified hierarchy relations. As a baseline, we consider all segments to be on the same level. Branavan et al. (2007) extracted a single TOC from an algorithms textbook (Cormen et al., 2001) and split it into a training and a test set. We use the complete TOC as a test set and refer to it as Cormen. As a single TOC is a shallow basis for experimental results, we create two additional datasets We create the first dataset from randomly selected featured articles in Wikipedia. They have been shown to be of high quality (Stein and Hess, 2007) and are complex enough to contain hierarchical TOCs. We create a second dataset using 55 books from the project Gutenberg. 3 We refer to these datasets as Wikipedia and Gutenberg. We annotated these datasets with the hierarchy level of each segment, ranging from 1 (top-level segment) to the lowest-level segment found in the datasets. Table 1 gives an overview of the datasets regarding the segment structure. Although the Cormen dataset consists of one book only, it contains more segments than an average document in any other dataset and thus is a valuable evaluation resource. The Wikipedia dataset contains on average the fewest tokens in each segment, in other words -the most fine-grained TOC. The Wikipedia and Gutenberg dataset cover a broad spectrum of topics while the Cormen dataset is focused on computational algorithms. Table 2 shows the distribution of levels in the datasets. The Cormen dataset has a much deeper structure compared to the other two datasets. 
The fraction of segments on the first level is below 1% because a single document may have only one toplevel segment and this document contains far more We focus on the pairwise classification in this paper and investigate the pairwise relation of neighboring segments. Two segments on the same level have a hierarchy relation of n=0, a segment that is one level lower has a hierarchy relation of n=1. Table 3 shows that for all datasets most of the segment pairs (neighboring segments) are on the same level. Although there are segments which are two level higher or three levels higher than the previous segment, this is the case for no more than 1% of all segment pairs. The Cormen has the highest deviation of level relation. This is due to the fact that its segments have a broad distribution of levels (see Table 2 ). Segments in the Gutenberg dataset, on the other hand, are in 80% of all cases on the same level as the previous segment. The case that the next segment is two level lower, i.e. n=2, is very unlikely. This is in line with our expectations that a writer does not skip levels when starting a lower level segment.",
"cite_spans": [
{
"start": 12,
"end": 31,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 139,
"end": 140,
"text": "2",
"ref_id": null
},
{
"start": 320,
"end": 342,
"text": "Branavan et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 394,
"end": 415,
"text": "(Cormen et al., 2001)",
"ref_id": "BIBREF6"
},
{
"start": 747,
"end": 769,
"text": "(Stein and Hess, 2007)",
"ref_id": "BIBREF21"
},
{
"start": 893,
"end": 894,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1106,
"end": 1113,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 1606,
"end": 1613,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 2149,
"end": 2156,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 2560,
"end": 2567,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Entity-based",
"sec_num": null
},
{
"text": "We evaluate performance of our system using 10-fold cross-validation on previously unseen data using The Lab as experimental framework (Eckart de Castilho and Gurevych, 2011). Performance is measured in terms of accuracy and is defined as the ratio of correctly identified relations. Table 4 shows our results on each dataset. Always predicting two segments to be on the same level is a strong baseline, as this is the case for Table 4 : Accuracy of approaches for hierarchy identification. Best results of feature groups and combinations are marked bold. Cormen is focused on a single topic (algorithms) and thus containing reappearing n-grams. Noun chunk features are the best-performing group of features on the Wikipedia and Gutenberg and second best on the Cormen dataset. Entity, keyphrase, and frequency features do not improve the baseline in any of the presented datasets. Apparently, they are no good indicator for the hierarchical structure of document segments.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 4",
"ref_id": null
},
{
"start": 428,
"end": 435,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Predicted 2 1 0 \u22121 \u22122 Actual 2 - 4 - - - 1 - 567 - - - 0 - - 2,585 - - \u22121 - - 478 - - \u22122 - - 24 - -",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
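Accuracy as reported here is the diagonal mass of such a confusion matrix. A sketch, with the counts read off the matrix above (labels ordered 2, 1, 0, −1, −2; the function name is our own):

```python
def accuracy_from_confusion(matrix):
    """matrix[i][j] = number of pairs with actual label i and predicted label j
    (same label order on both axes). Accuracy = diagonal sum / total sum."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total

# Counts from the confusion matrix above (rows: actual 2, 1, 0, -1, -2).
m = [
    [0,   4,    0, 0, 0],
    [0, 567,    0, 0, 0],
    [0,   0, 2585, 0, 0],
    [0,   0,  478, 0, 0],
    [0,   0,   24, 0, 0],
]
```

Only the n=1 and n=0 columns are populated, so accuracy is (567 + 2585) / 3658, roughly 0.86.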
{
"text": "Combining all features further improves results on the Cormen dataset. However, the best results are obtained by combining all besides entity and keyphrase features. On the other two datasets (Wikipedia and Gutenberg), a combination of all features decreases accuracy compared to a supervised system using only noun chunk features. The highest accuracy is obtained by using all features besides n-gram features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Based on our observation that a combination of all features performs worse than a selection of features, we analyzed the confusion matrix of the corresponding systems. Table 5 shows the confusion matrix for the best performing system from Table 4 on the Wikipedia dataset using selected features (all w/o n-gram features). The system is optimized towards accuracy and trained on unbalanced training data. This leads to a system returning either n= 1 (next level is one level lower) or n= 0 (same level). There are no cases where a lower-level segment is incorrectly classified as a higher-level segment but all cases with |n| \u2265 2 are incorrectly classified as having a level difference of one. Table 6 shows the confusion matrix for a system using all features on the same dataset as before (Wikipedia). The system also covers the case n= \u22121 (next level is one level higher), thus creating more realistic TOCs. In contrast to the previous system (see Table 5 ), some higher-level segment relations (n<0) are incorrectly classified as lower-level segment relations (n>0). Although the system using all features returns a lower precision than the one using selected features, it better captures the way writers construct documents (also having segments on a higher level than previous segments).",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 239,
"end": 246,
"text": "Table 4",
"ref_id": null
},
{
"start": 694,
"end": 701,
"text": "Table 6",
"ref_id": "TABREF10"
},
{
"start": 951,
"end": 958,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Overall, results show that automatic hierarchy identification provides a TOC with a sufficient quality. To support this observation, Figure 3 shows the correct and predicted TOCs for the article about Apollo 8 from the Wikipedia dataset. The correct TOC is on the left and the predicted TOC is on the right. Section 1.3 (Mission control) was erroneously identified as being on a higher level than the previous section. The system fails to identify that both segments are about the crew (backup and mission control crew). The section Planning is correctly identified as having a higher level than the previous segment but leading to a different numbering ",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "So far, we have shown that our system is able to automatically predict a TOC for documents segment boundaries. In order to extend our system to documents that do not have titles for segments, we add a segment title generation step. News documents are very often segmented into smaller parts, but usually do not contain segment titles. 4 We decided not to reuse existing datasets from summarization or keyphrase extraction tasks, as they are only focused on one possible style of titles (i.e. summaries or keyphrases). Instead, we apply our algorithms to the previously presented datasets for hierarchy identification (see Section 3.1) and analyze their characteristics with respect to their segment titles. The percentage of titles that actually appear in the corresponding segments is lowest for the Wikipedia dataset (18%) while it is highest on the Cormen dataset (27%). In the Gutenberg dataset 23% of all titles appear in the text. The high value for the Cormen dataset is due to the specific characteristic that segment titles are repeated very often at the beginning of a segment. Figure 4 : Frequency distribution of a random sample of 607 titles on log-log-scale: it follows a power-law distribution.",
"cite_spans": [
{
"start": 335,
"end": 336,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1088,
"end": 1096,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segment Title Generation",
"sec_num": "5"
},
{
"text": "We further analyze the datasets in terms of segment counts for each title. Figure 4 shows the frequency of titles in the evaluation set on a logarithmic scale. We choose a random sample of 607 titles, which is the lowest number of titles in all three corpora, to allow a fair comparison across corpora. For all three datasets, most titles are used for few segments. For the datasets Wikipedia and Cormen some titles are used more frequently. In comparison to that, the most-frequent title of the Gutenberg dataset appears twice, only. Thus, we expect the supervised approaches to be most beneficial on the Wikipedia dataset. On the Cormen dataset we cannot apply any supervised approaches due to the lack of training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Frequency Distribution of Titles",
"sec_num": null
},
{
"text": "Text-based approaches As simple baselines, we use the first token and the first noun phrase occurring in each segment. As a more sophisticated baseline, we rank tokens according to their tf-idf scores. Additionally, we use TextRank (Mihalcea and Tarau, 2004) to rank noun phrases according to their co-occurrence frequencies.",
"cite_spans": [
{
"start": 232,
"end": 258,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "As named entities from a segment are often used as titles, we extract them using the Stanford Named Entity Tagger (Finkel et al., 2005) and take the first one as the segment title. 6",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "Supervised approaches We train a text classification model based on character 6-grams. 7 for sort is a sorting algorithm . . . 6 We also experimented using the most frequent entity but achieved lower results. 7 A previous evaluation has shown that 6-grams yield the best results for this task on all development sets. We used LingPipe: http://alias-i.com/lingpipe for classification.",
"cite_spans": [
{
"start": 127,
"end": 128,
"text": "6",
"ref_id": null
},
{
"start": 209,
"end": 210,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "each of the most frequent titles in each dataset. In Wikipedia, most articles have sections like See also, References, or External links, while books usually start with a chapter Preface. We restrict the list of title candidates to those appearing at least twice in the training data. We use a statistical model for predicting the title of a segment In contrast to previous approaches (Branavan et al., 2007; Nguyen and Shimazu, 2009; Jin and Hauptmann, 2001 ), we do not train on parts of the same document for which we want to predict titles, but rather on full documents of the same type (Wikipedia articles and books). This is an important difference, as in our usage scenario we need to generate full TOCs for previously unseen documents. On the Cormen dataset we cannot perform a trainings phase as it consists of one book.",
"cite_spans": [
{
"start": 385,
"end": 408,
"text": "(Branavan et al., 2007;",
"ref_id": "BIBREF0"
},
{
"start": 409,
"end": 434,
"text": "Nguyen and Shimazu, 2009;",
"ref_id": "BIBREF18"
},
{
"start": 435,
"end": 458,
"text": "Jin and Hauptmann, 2001",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "We evaluated all approaches using two evaluation metrics. We propose accuracy as evaluation metric. A generated title is counted as correct only if it exactly matches the correct title. Hence, methods that generate long titles by adding many important phrases are penalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "The Rouge evaluation metric is commonly used for evaluating summarization systems. It is based on n-gram overlap, where -in our case-the generated title is compared to the gold title. We use Rouge-L which is based on the longest common subsequence. This metric is frequently used in previous work for evaluating supervised approaches to generating TOCs because it considers near misses. We believe that it is not well suited for evaluating title generation, however, we use it for the sake of comparison with related work. Table 7 shows the results of title generation approaches on the three datasets. On the Cormen dataset, we compare our approaches with two state-of-the-art methods. For the newly created datasets no previous results are available.",
"cite_spans": [],
"ref_spans": [
{
"start": 523,
"end": 530,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "Using the first noun phrase returns the best titles on the Cormen dataset, which is in agreement with our observation from Section 5.1 that many segments repeat their title in the beginning. This also explains the high performance of the state-of-the-art approaches which are also taking the position and part of speech of candidates into account. Branavan et al. (2007) report about a feature for the supervised systems eliminating generic phrases without giving example of these phrases.",
"cite_spans": [
{
"start": 348,
"end": 370,
"text": "Branavan et al. (2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5.2"
},
{
"text": "Supervised text classification approach works quite well in case of the Wikipedia dataset with its frequently appearing titles. The approach does not work well on the Gutenberg dataset, as segments such as Preface treat different topics in most Gutenberg books. Consequently, the text classifier is not able to learn the specific properties of that segment. In future work, it will be necessary to adapt the classifier in order to focus on non-standard features that better grasp the function of a segment inside a document. For example, the introduction of a scientific paper always reads \"introduction-like\" while the covered topic changes from paper to paper. This is in line with research concerning topic bias (Mikros and Argiri, 2007; Brooke and Hirst, 2011) in which topicindependent features are applied.",
"cite_spans": [
{
"start": 715,
"end": 740,
"text": "(Mikros and Argiri, 2007;",
"ref_id": "BIBREF17"
},
{
"start": 741,
"end": 764,
"text": "Brooke and Hirst, 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5.2"
},
{
"text": "The overall level of performance in terms of accuracy and Rouge seems rather low. However, accuracy is only a rough estimate of the real performance, as many good titles might not be represented in the gold standard and Rouge is higher when comparing longer texts. Besides, a user might be interested in a specialized table-ofcontents, such as one consisting only of named entities. For example, in a document about US presidential elections, a TOC consisting only of the names of presidents might be more informative than one consisting of the dates of the fouryear periods. A flexible system for generating segment titles enables the user to decide on which titles are more interesting and thus increasing the user's benefit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5.2"
},
{
"text": "Combination of approaches As we have discussed, the usage of titles highly depends on the domain of the document and the expectations of the reader. We aim to overcome the limitations of single approaches by combining multiple approaches and integrating the reader's choice to improve the overall acceptance of a title generation system. It is essential that a combination reflects different styles of titles to cover most of the reader's preferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5.2"
},
{
"text": "We combine complementary approaches based on three baseline systems (first NP, tf-idf, and named entities) and additionally the supervised approach (text classification). We expect the three text-based features to provide a stable performance, while the supervised approach may boost the performance on some datasets. As these approaches typically use an independent set of title candidates, they can potentially achieve a higher performance. Commonly used combination strategies like voting or complex strategies (Chen, 2011) can only be applied within approaches from the same class, as different classes will output different titles. Besides, it is desirable to create a diversity of candidates without ignoring titles generated by only one approach. Results in Table 7 reveals that a combination of approaches provides the highest accuracy of all approaches. We cannot compare a list of generated titles to a gold title with Rouge, thus not presenting any numbers (n/a). We utilize the benefit of accuracy allowing to compare a set of generated titles to a gold title. In a real-world setting, a user selects the best title from the list which means that only one suggestion has to match the gold standard. Although providing a larger result set increases accuracy, results are stable for all datasets.",
"cite_spans": [
{
"start": 514,
"end": 526,
"text": "(Chen, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 765,
"end": 772,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5.2"
},
{
"text": "We presented the first study on automatically identifying the hierarchical structure of a table-ofcontents for different kinds of text (articles and books from different domains). The task of segment hierarchy identification is a new task which has not been investigated for non-HTML text. We created two new evaluation datasets for this task, and used a supervised approach based on textual features and a background corpus and significantly improved results over a strong baseline. For documents with missing segment titles, generating segment titles is an interesting use case for keyphrase extraction and text classification techniques. We applied approaches from both tasks the existing and two new evaluation datasets and show that the performance of approaches is still quite low. Overall, we have shown that for most documents a TOC can be generated by detecting the hierarchical relations if the documents already contain segments with corresponding titles. In the other cases, one can use segment title generation, but additional research based on our newly created datasets will be necessary to further improve the task performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In future work, we want to develop a prototype of our search interface and perform user acceptance tests. Furthermore, we want to continue develop better features for the task of hierarchy identification, and want to create methods for postprocessing a TOC in order to generate a coherent table-of-contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "We made the newly created evaluation datasets and our experimental framework publicly available in order to foster future research in table-ofcontents generation. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "E.g. neighboring segments must not have the same title.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We experimented with Na\u00efve Bayes and J48 but results were significantly lower.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The same collection of books was used byCsomai and Mihalcea (2006) for experiments on back-of-the-book indexing. They mostly cover the domains humanities, science, and technology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, cnn.com uses story paragraphs. 5 For example, the segment Quicksort begins with: Quick-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at http://www.ukp.tu-darmstadt. de/data/table-of-contents-generation/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the Volkswagen Foundation as part of the ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generating a Table-of-Contents",
"authors": [
{
"first": "S",
"middle": [
"R K"
],
"last": "Branavan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Deshpande",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2007,
"venue": "Annual Meeting of Association for Computational Linguistics",
"volume": "45",
"issue": "",
"pages": "544--551",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.R.K. Branavan, P. Deshpande, and R. Barzilay. 2007. Generating a Table-of-Contents. In Annual Meeting of Association for Computational Linguistics, vol- ume 45, pages 544-551.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Web 1T 5-gram Corpus version 1.1",
"authors": [
{
"first": "T",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Brants and A. Franz. 2006. Web 1T 5-gram Corpus version 1.1. Technical report, Google Inc., Philadel- phia, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Native Language Detection with 'cheap' Learner Corpora",
"authors": [
{
"first": "J",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2011,
"venue": "Learner Corpus Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Brooke and G. Hirst. 2011. Native Language De- tection with 'cheap' Learner Corpora. In Learner Corpus Research 2011 (LCR 2011).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Survey of Web Clustering Engines",
"authors": [
{
"first": "C",
"middle": [],
"last": "Carpineto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Osi\u0144ski",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "41",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Carpineto, S. Osi\u0144ski, G. Romano, and D. Weiss. 2009. A Survey of Web Clustering Engines. ACM Computing Surveys (CSUR), 41(3):17.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Collaborative Ranking: A Case Study on Entity Linking",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "771--781",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Chen. 2011. Collaborative Ranking: A Case Study on Entity Linking. pages 771-781.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Incremental Parsing with the Perceptron Algorithm",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "111--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins and B. Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceedings of the 42nd Meeting of the Association for Computa- tional Linguistics, pages 111-118.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Introduction to Algorithms",
"authors": [
{
"first": "T",
"middle": [
"H"
],
"last": "Cormen",
"suffix": ""
},
{
"first": "C",
"middle": [
"E"
],
"last": "Leiserson",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Rivest",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. 2001. Introduction to Algorithms. The MIT press, Cambridge, MA, USA, 2nd edition.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Creating a Testbed for the Evaluation of Automatically Generated Back-of-the-book Indexes",
"authors": [
{
"first": "A",
"middle": [],
"last": "Csomai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "429--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Csomai and R. Mihalcea. 2006. Creating a Testbed for the Evaluation of Automatically Gener- ated Back-of-the-book Indexes. In Computational Linguistics and Intelligent Text Processing, pages 429-440.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning as Search Optimization: Approximate Large Margin Methods for Structured Prediction",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Daum\u00e9 and D. Marcu. 2005. Learning as Search Optimization: Approximate Large Margin Meth- ods for Structured Prediction. Proceedings of the 22nd International Conference on Machine Learn- ing, (1):169-176.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Lightweight Framework for Reproducible Parameter Sweeping in Information Retrieval",
"authors": [
{
"first": "R",
"middle": [],
"last": "Eckart De Castilho",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 workshop on Data infrastructures for supporting information retrieval evaluation, DE-SIRE '11",
"volume": "",
"issue": "",
"pages": "7--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Eckart de Castilho and I. Gurevych. 2011. A Lightweight Framework for Reproducible Parame- ter Sweeping in Information Retrieval. In Proceed- ings of the 2011 workshop on Data infrastructures for supporting information retrieval evaluation, DE- SIRE '11, pages 7-10, New York, NY, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Learning Approach to Discovering Web Page Semantic Structures",
"authors": [
{
"first": "J",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Haffner",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2005,
"venue": "Eighth International Conference on Document Analysis and Recognition (ICDAR'05)",
"volume": "",
"issue": "",
"pages": "1055--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Feng, P. Haffner, and M. Gilbert. 2005. A Learning Approach to Discovering Web Page Semantic Struc- tures. In Eighth International Conference on Doc- ument Analysis and Recognition (ICDAR'05), pages 1055-1059. Ieee.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Incorporating non-local Information into Information Extraction Systems by Gibbs Sampling",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.R. Finkel, T. Grenager, and C. Manning. 2005. In- corporating non-local Information into Information Extraction Systems by Gibbs Sampling. In Proceed- ings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363-370.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Domain-specific Keyphrase Extraction",
"authors": [
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Paynter",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of 16th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "668--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Frank, G.W. Paynter, and I.H. Witten. 1999. Domain-specific Keyphrase Extraction. In Proceed- ings of 16th International Joint Conference on Arti- ficial Intelligence, pages 668-673.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The WEKA Data Mining Software: an Update. SIGKDD Explorations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "11",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reute- mann, and I.H. Witten. 2009. The WEKA Data Mining Software: an Update. SIGKDD Explo- rations, 11(1):10-18.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic Title Generation for Spoken Broadcast News",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "A",
"middle": [
"G"
],
"last": "Hauptmann",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the first international conference on Human language technology research",
"volume": "",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Jin and A.G. Hauptmann. 2001. Automatic Title Generation for Spoken Broadcast News. In Pro- ceedings of the first international conference on Hu- man language technology research, pages 1-3. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic Titling of Articles using Position and Statistical Information",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Prince",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Roche",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language",
"volume": "",
"issue": "",
"pages": "727--732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Lopez, V. Prince, and M. Roche. 2011. Automatic Titling of Articles using Position and Statistical In- formation. Proceedings of the International Con- ference on Recent Advances in Natural Language, pages 727-732.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "TextRank: Bringing Order into Text",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mihalcea and P. Tarau. 2004. TextRank: Bringing Order into Text. In Proceedings of the 2004 Con- ference on Empirical Methods in Natural Language Processing, pages 404-411.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Authorship Identifica-tion, and Near-Duplicate Detection",
"authors": [
{
"first": "G",
"middle": [],
"last": "Mikros",
"suffix": ""
},
{
"first": "E",
"middle": [
"K"
],
"last": "Argiri",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the SIGIR 2007 International Work-shop on Plagiarism Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Mikros and E.K. Argiri. 2007. Investigating Topic Influence in Authorship Attribution. In Proceedings of the SIGIR 2007 International Work-shop on Pla- giarism Analysis, Authorship Identifica-tion, and Near-Duplicate Detection, PAN 2007.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Semisupervised Approach for Generating a Table-of-Contents",
"authors": [
{
"first": "L",
"middle": [
"M"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Shimazu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Conference RANLP-2009",
"volume": "",
"issue": "",
"pages": "312--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L.M. Nguyen and A. Shimazu. 2009. A Semi- supervised Approach for Generating a Table-of- Contents. In Proceedings of the International Con- ference RANLP-2009, number 1, pages 312-317.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Tree Learning Approach to Web Document Sectional Hierarchy Extraction",
"authors": [
{
"first": "F",
"middle": [
"C"
],
"last": "Pembe",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "G\u00fcng\u00f6r",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of 2nd International Conference on Agents and Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F.C. Pembe and T. G\u00fcng\u00f6r. 2010. A Tree Learning Ap- proach to Web Document Sectional Hierarchy Ex- traction. In Proceedings of 2nd International Con- ference on Agents and Artificial Intelligence.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Probabilistic Part-of-Speech Tagging using Decision Trees",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of International Conference on new Methods in Language Processing",
"volume": "12",
"issue": "",
"pages": "44--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Schmid. 1994. Probabilistic Part-of-Speech Tag- ging using Decision Trees. In Proceedings of Inter- national Conference on new Methods in Language Processing, volume 12, pages 44-49.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Does It Matter Who Contributes? -A Study on Featured Articles in the German Wikipedia",
"authors": [
{
"first": "K",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hess",
"suffix": ""
}
],
"year": 2007,
"venue": "HT '07: Proceedings of the eighteenth conference on Hypertext and hypermedia",
"volume": "",
"issue": "",
"pages": "171--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Stein and C. Hess. 2007. Does It Matter Who Contributes? -A Study on Featured Articles in the German Wikipedia. In HT '07: Proceedings of the eighteenth conference on Hypertext and hyperme- dia, pages 171-174, New York, NY, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning Algorithms for Keyphrase Extraction",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2000,
"venue": "Information Retrieval",
"volume": "2",
"issue": "4",
"pages": "303--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.D. Turney. 2000. Learning Algorithms for Key- phrase Extraction. Information Retrieval, 2(4):303- 336.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Search user interface showing a TOC along with the search results."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Correct and predicted TOCs of article about Apollo 8 from the Wikipedia dataset. (5 instead of 4 due to earlier errors). Not all of the remaining segment relations are correctly identified but the overall TOC still provides a quick reference of the article's content. It allows a reader to quickly decide whether the article about Apollo 8 fulfills his information need."
},
"TABREF0": {
"num": null,
"text": ".wikipedia.org/wiki/Sorting_algorith m In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and ... .smith.edu/~thiebaut/java/sort/demo. html...",
"content": "<table><tr><td/><td>1 Classification</td></tr><tr><td/><td>1.1 Stability</td></tr><tr><td/><td>2 Comparison of algorithms</td></tr><tr><td/><td>3 Summaries of popular sorting algorithms</td></tr><tr><td/><td>3.1 Bubble sort</td></tr><tr><td/><td>3.2 Selection sort</td></tr><tr><td/><td>3.3 Insertion sort</td></tr><tr><td/><td>3.4 Shell sort</td></tr><tr><td/><td>3.5 Comb sort</td></tr><tr><td>Quicksort -Bubble sort -Merge sort -</td><td>3.6 Merge sort</td></tr><tr><td>Shellsort</td><td>3.7 Heapsort</td></tr><tr><td/><td>3.8 Quicksort</td></tr><tr><td>Sorting Algorithm Animations</td><td>3.9 Counting sort</td></tr><tr><td>www.sorting-algorithms.com/ Animation, code, analysis, and discussion of 8 sorting algorithms on 4 initial conditions. Quick Sort -Insertion Sort -Quick Sort (3 Way Partition) -Shell Sort</td><td>3.10 Bucket sort 3.11 Radix sort 3.12 Distribution sort 3.13 Timsort 4 Memory usage patterns and index sorting 5 Inefficient/humorous sorts</td></tr><tr><td/><td>6 See also</td></tr><tr><td/><td>7 References</td></tr><tr><td/><td>8 External links</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Characteristics of evaluation datasets.",
"content": "<table><tr><td colspan=\"6\">Showing the total number of documents (doc),</td></tr><tr><td colspan=\"6\">segments (seg) and average number of tokens in</td></tr><tr><td colspan=\"2\">each segment (\u2205 tok seg ).</td><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"3\">Hierarchy level</td><td/></tr><tr><td>Name</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td>Cormen</td><td colspan=\"5\">.00 .02 .08 .41 .48</td></tr><tr><td colspan=\"6\">Wikipedia .07 .48 .41 .04 .00</td></tr><tr><td colspan=\"6\">Gutenberg .01 .35 .49 .12 .03</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"text": "Distribution of segments over levels of the evaluation corpora.containing real-world tables of contents, allowing us to evaluate on different domains and styles of hierarchies.",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF5": {
"num": null,
"text": "Pairwise hierarchy relation Name n= 2 n= 1 n= 0 n= \u22121 n= \u22122",
"content": "<table><tr><td>Cormen</td><td>.00</td><td>.20</td><td>.60</td><td>.16</td><td>.03</td></tr><tr><td>Wikipedia</td><td>.00</td><td>.15</td><td>.71</td><td>.13</td><td>.01</td></tr><tr><td>Gutenberg</td><td>.00</td><td>.10</td><td>.80</td><td>.09</td><td>.01</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"text": "Distribution of pairwise level difference of segments of the evaluation corpora. than 100 segments. This is a special characteristic of this book: since it is often used to quickly look up specific topics, the authors provide a very fine-grained table-of-contents. In Wikipedia, most of the segments are on the second level. Articles in Wikipedia are rather short, because according to the Wikipedia author guidelines a segment of a Wikipedia article is moved into an independent ar-",
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF8": {
"num": null,
"text": "",
"content": "<table><tr><td>: Confusion matrix for best system (all fea-</td></tr><tr><td>tures w/o n-gram features) on Wikipedia dataset.</td></tr><tr><td>Correctly identified segments are marked bold.</td></tr><tr><td>60.2% of cases in the Cormen and 79.8% of cased</td></tr><tr><td>in the Gutenberg dataset. The table shows results</td></tr><tr><td>for each of the feature groups defined in Section 3</td></tr><tr><td>numbered from (1) to (6). N-gram features per-</td></tr><tr><td>form best on the Cormen dataset while they per-</td></tr><tr><td>form worse than the baseline on the Wikipedia</td></tr><tr><td>(WP) dataset. This difference might be due to</td></tr><tr><td>the topic diversity in the Wikipedia and Cormen</td></tr><tr><td>datasets. Wikipedia covers many topics, while</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF10": {
"num": null,
"text": "",
"content": "<table><tr><td>: Confusion matrix for a system using all</td></tr><tr><td>features on Wikipedia dataset. Correctly identified</td></tr><tr><td>segments are marked bold.</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF12": {
"num": null,
"text": "Title generation results. No results for supervised text classification on the Cormen dataset are shown since no training data is available.",
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}