{
"paper_id": "U16-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:10:42.550053Z"
},
"title": "Presenting a New Dataset for the Timeline Generation Problem",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Holt",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Will",
"middle": [],
"last": "Radford",
"suffix": "",
"affiliation": {},
"email": "wradford@hugo.ai"
},
{
"first": "Ben",
"middle": [],
"last": "Hachey",
"suffix": "",
"affiliation": {},
"email": "ben.hachey@sydney.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The timeline generation task summarises an entity's biography by selecting stories representing key events from a large pool of relevant documents. This paper addresses the lack of a standard dataset and evaluative methodology for the problem. We present and make publicly available a new dataset of 18,793 news articles covering 39 entities. For each entity, we provide a gold standard timeline and a set of entityrelated articles. We propose ROUGE as an evaluation metric and validate our dataset by showing that top Google results outperform straw-man baselines.",
"pdf_parse": {
"paper_id": "U16-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "The timeline generation task summarises an entity's biography by selecting stories representing key events from a large pool of relevant documents. This paper addresses the lack of a standard dataset and evaluative methodology for the problem. We present and make publicly available a new dataset of 18,793 news articles covering 39 entities. For each entity, we provide a gold standard timeline and a set of entityrelated articles. We propose ROUGE as an evaluation metric and validate our dataset by showing that top Google results outperform straw-man baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information is more readily available in greater quantities than ever before. Timeline generation is a recent method for summarising data -taking a large pool of entity-related documents as input and selecting a small set that best describe the key events in the entity's life. There are several challenges in evaluation: (1) finding gold-standard timelines, (2) finding corpora from which to draw documents to build timelines, and (3) evaluating system timelines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Standard practice for the first challenge is to make use of existing timelines produced by news agencies (Chieu and Lee, 2004; Yan et al., 2011a) , but these are constrained by the tight editorial focus on prominent entities and depends on wellfunded news agencies. Another approach is to annotate new timelines from the web for domains of choice. Wang (2013) do this, but do not make their data available for direct comparison. Regarding the second challenge, access to the document pool used during the annotation process is also important, as any system must have a reasonable set from which to choose.",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "(Chieu and Lee, 2004;",
"ref_id": "BIBREF1"
},
{
"start": 127,
"end": 145,
"text": "Yan et al., 2011a)",
"ref_id": null
},
{
"start": 348,
"end": 359,
"text": "Wang (2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous approaches have used drawn on working in document summarisation, using ROUGE (Lin, 2004) to evaluate timeline generation (Chieu and Lee, 2004; Yan et al., 2011a; Yan et al., 2011b; Ahmed and Xing, 2012; Wang, 2013) . This is convenient as each element in a timeline can represent a story which can be equivalently described by many different documents. However, previous work has not validated the use of ROUGE for evaluating timeline generation.",
"cite_spans": [
{
"start": 86,
"end": 97,
"text": "(Lin, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 130,
"end": 151,
"text": "(Chieu and Lee, 2004;",
"ref_id": "BIBREF1"
},
{
"start": 152,
"end": 170,
"text": "Yan et al., 2011a;",
"ref_id": null
},
{
"start": 171,
"end": 189,
"text": "Yan et al., 2011b;",
"ref_id": "BIBREF6"
},
{
"start": 190,
"end": 211,
"text": "Ahmed and Xing, 2012;",
"ref_id": "BIBREF0"
},
{
"start": 212,
"end": 223,
"text": "Wang, 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present a general framework for creating a crowd-sourced datasets for evaluating timeline generation, including choosing a set of entities, deriving articles for annotation from Wikipedia, and annotating these articles to generate a gold standard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dataset covers a broad range of entities with different levels of news-coverage or publicity. We provide gold-standard timelines for each entity, as well as a larger pool of topically-linked documents for further development. We analyse the dataset, showing some interesting artifacts of crowd-worker importance judgements and use ROUGE evaluation to verify that the crowdannotations correlate well with Google News 1 rankings. This reflects favourably on Google News, suggesting that it is a strong baseline for timeline generation. We release the dataset generated through this process in the hope that it will be useful in providing common benchmarks for future work on the timeline generation task. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We now detail how we choose entities and collect a corpus for annotating gold-standard timelines and evaluating models. We have taken care to design a general experimental protocol that can be used to generate entities from a range of domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2"
},
{
"text": "We begin by choosing a domain (politics) and two regions (The USA/Australia). We motivate this choice of domain by noting its large media interest, polarising entities and diverse range of topics. We choose several entities from each region a priori -39 in total. The rest of our entity-set is then generated through a process of bootstrapping. At each iteration, we use our current entities as seeds. For each seed-entity, we performed a Google News query. An entity name was defined as either the title of the entity's Wikipedia page or {#Firstname #Lastname} if they did not have a Wiki. We choose five articles from the first page of results. For each article we manually identify all other previously unseen entities and add them to our set. We continue this process of bootstrapping, using our newly included entities as seeds for the next iteration. Once we have a sufficient number of entities, we terminate the process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2"
},
{
"text": "This process can be viewed as bootstrap sampling, weighted by the probability an entity occurs in one of the articles we retrieve. By doing so we provide a realistic set of entities with varying levels of popularity and coverage. In Section 4 we show that this process results in a wide distribution of mentions and reference-timeline sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2"
},
{
"text": "As well as relevant entities, we also provide a corpus of relevant documents from which we can construct their timelines. Each document in the corpus includes URL, publishing date and other metadata. These were obtained by performing Google News queries on our entities, and retrieving the resulting URLs. As our timelines should cover a wide range of time, we set the time-range on the query to 'archives'. This has the effect of returning articles from a broader period of time, mitigating the default recency bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2"
},
{
"text": "In total, there are 15,596 articles. The minimum, median and maximum number of articles per entity was 54, 464 and 985 respectively. By including this corpus with our gold-standards we aim to provide a complete dataset for the timeline generation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "2"
},
{
"text": "We present a general framework for formulating gold-standard timeline generation as an annotation task. This involves two components -using Wikipedia to generate a minimal set of sufficient links, and the formulation of the problem as an annotation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Annotation and Gold-Standards",
"sec_num": "3"
},
{
"text": "Annotation is cost-sensitive to the size of the task. As such, attempting to annotate the whole corpus of over 15,000 articles is infeasible. We propose a method to reduce the size of our task while maintaining the quality of the underlying timeline. For our article selection process, we need to fulfil the following criteria:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Article Selection",
"sec_num": "3.1"
},
{
"text": "\u2022 Coverage: Our set of articles should have good coverage. Timelines should cover a broad range of time-periods and events. As such, the dataset we derive our reference timelines from must also share this property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Article Selection",
"sec_num": "3.1"
},
{
"text": "\u2022 Manageability: Each entity-article pair will be subject to a number of crowd-judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Article Selection",
"sec_num": "3.1"
},
{
"text": "As such, it's important to balance coverage with total data-set size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Article Selection",
"sec_num": "3.1"
},
{
"text": "\u2022 Informativeness: Ideally we desire the articles to be of a high quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Article Selection",
"sec_num": "3.1"
},
{
"text": "To meet these criteria, we scrape the external (non-Wiki) links from an entity's Wikipedia page. We motivate this decision by first noting the Wikipedia guidelines on verifiability 3 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Article Selection",
"sec_num": "3.1"
},
{
"text": "Attribute all quotations and any material challenged or likely to be challenged to a reliable, published source using an inline citation. The cited source must clearly support the material as presented in the article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Article Selection",
"sec_num": "3.1"
},
{
"text": "These standards of verifiability are not universally followed. Nevertheless, where they are we expect reasonable entity coverage and informativeness. After removing invalid URLS, we identify 3,197 articles for annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Article Selection",
"sec_num": "3.1"
},
{
"text": "We formulate timeline generation as an annotation task by reducing it to a simple classification problem. A single judgment is on the level of an entity-article pair. An annotator is given the first paragraph of, and a link to, the entity's Wikipedia page. They then follow a given link, and perform a two-stage classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowd-task Formulation",
"sec_num": "3.2"
},
{
"text": "The annotators first determine whether a link is valid. A valid article is one that covers a single event in the target entity's life. They then indicate the importance of an article's when considering the story of the entity. There was a choice of three labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowd-task Formulation",
"sec_num": "3.2"
},
{
"text": "\u2022 Very important: key events which would be included in a one-page summary or brief of the entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowd-task Formulation",
"sec_num": "3.2"
},
{
"text": "\u2022 Somewhat important: newsworthy events that might make it into a broader Biography, but not of critical relevance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowd-task Formulation",
"sec_num": "3.2"
},
{
"text": "\u2022 Not important: events which are mundane or unimportant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowd-task Formulation",
"sec_num": "3.2"
},
{
"text": "For our annotations, we use the CrowdFlower 4 platform. On average, three judgments by a trusted user were made per row. A trusted user was one whose annotations agreed with our expert's across a validation set (n = 48) at least 80% of the time. This was then aggregated into a classification label. In addition, CrowdFlower also provides a confidence measure on each judgment -a score of agreement, weighted by trust of the crowd-worker. Gold standard timelines comprise the articles that are judged to be both 'valid' and 'very important'. There were 2,601 'valid articles' and 217 'very important'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowd-task Formulation",
"sec_num": "3.2"
},
{
"text": "We see that particularly prominent entities are responsible for a large portion of the articles. 'Barack Obama' and 'Donald Trump' each have over four hundred articles each. In fact, the six most prominent entities account for over half of all total articles (Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 268,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Analysis of Gold-Standards",
"sec_num": "4"
},
{
"text": "Very Important Articles The 'very important' articles make up our gold-standard timelines. The mean and median number of articles per entity is 5.56 and 2 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Gold-Standards",
"sec_num": "4"
},
{
"text": "There are some interesting properties that emerge. 'Barack Obama' and 'Donald Trump' each have around the same number of articles. The former has 14.6% articles deemed 'very important' -the latter only 1.5% (Figure 2 ). It is a given that certain entity's will be involved in more newsworthy events than others. However, to have such a large 5 discrepancy -considering to that all articles were deemed necessary to reference in an entity's Wiki -is curious. We believe the proportion of 'very important' articles for a given entity is an interesting avenue of future research.",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 216,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis of Gold-Standards",
"sec_num": "4"
},
{
"text": "Confidence 'Very important' articles have a mean confidence of 0.60. Only 4.6% of articles received a unanimous 1.00 confidence score (Figure 3) . However, three-quarter of the 'very important articles' had a confidence over 0.50 (Figure 4) . 'Somewhat important' articles have the highest overall confidence, with a mean value of 0.76. Over a third of these articles had a confidence score of 1.00 (Figure 3 ). This is somewhat understandable. Intuitively, 'somewhat important' is the default prior -we would expect most articles to fall in this category. ",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 144,
"text": "(Figure 3)",
"ref_id": "FIGREF2"
},
{
"start": 230,
"end": 240,
"text": "(Figure 4)",
"ref_id": "FIGREF3"
},
{
"start": 399,
"end": 408,
"text": "(Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis of Gold-Standards",
"sec_num": "4"
},
{
"text": "For our evaluation pipeline, we adopt the approach of a number of papers in the field (Wang, 2013; Yan et al., 2011a; Yan et al., 2011b) in using the ROUGE metric (Lin, 2004) . ROUGE was first used in automatic summarisation evaluation. It is similar to the BLEU measure for machine translation (Papineni et al., 2002) . In terms of timeline evaluation, quality is measured by the amount of overlapping units (e.g. word n-grams) between articles in a system timeline and articles in a reference timeline. For details on how ROUGE scores are calculated, please refer to the original paper (Lin, 2004) . For our purposes, articles annotated as 'valid' and 'very important' are taken to be components of an entity's reference timeline. We use the ROUGE-F measure over unigrams and bi- ",
"cite_spans": [
{
"start": 86,
"end": 98,
"text": "(Wang, 2013;",
"ref_id": "BIBREF4"
},
{
"start": 99,
"end": 117,
"text": "Yan et al., 2011a;",
"ref_id": null
},
{
"start": 118,
"end": 136,
"text": "Yan et al., 2011b)",
"ref_id": "BIBREF6"
},
{
"start": 163,
"end": 174,
"text": "(Lin, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 295,
"end": 318,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 588,
"end": 599,
"text": "(Lin, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "In this section we use our supplemental dataset of articles generated by Google News to validate and benchmark the task. ROUGE vs. Search Rank For a given news query, an article's rank is a signal of its important and centrality. It is reasonable to expect then that the better an article's search-rank, the more likely it is to appear in an entity's timeline. This appears to be the case. For both the ROUGE-1 and -2 measures, there is a clear negative correlation between an article's average score and index ( Figure 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 521,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Benchmarks and System Validation",
"sec_num": "6"
},
{
"text": "Benchmarks For a given entity timeline, we include the following three benchmarks. Random (R): 15 articles are sampled from the entire corpus. Random+Linked (RL): 15 articles linked to the entity are sampled. Ordered+Linked (OL): the 15 highest-ranked articles for an entity are chosen. Reassuringly, we see that OL outperforms RL, which in turn outperforms R, for both ROUGE-1 and ROUGE-2 (Figure 6). OL received scores of 0.290 (ROUGE-1) and 0.051 (ROUGE-2), compared with 0.248 and 0.041 for RL and 0.2052 and 0.027 for R (Table 1). This can be taken as a strong benchmark for future timeline generation models trained and evaluated using this dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 399,
"text": "Figure 6",
"ref_id": null
},
{
"start": 526,
"end": 533,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Benchmarks and System Validation",
"sec_num": "6"
},
{
"text": "In this paper we have developed, analysed and justified a new dataset for the timeline generation problem. There are several interesting avenues for future work. The most obvious is the development of new timeline-generation systems using this dataset. There are also still problems to be solved with the process of evaluating timeline models, but we hope that the framework described above allow researchers to easily generate evaluation datasets for timeline generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "https://news.google.com 2 https://github.com/xavi-ai/ tlg-dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.wikipedia.org/wiki/Wikipedia:Verifiability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.crowdflower.com 5 Or tremendous?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Timeline: A Dynamic Hierarchical Dirichlet Process Model for Recovering Birth/Death and Evolution of Topics in Text Stream",
"authors": [
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amr Ahmed and Eric P Xing. 2012. Timeline: A Dynamic Hierarchical Dirichlet Process Model for Recovering Birth/Death and Evolution of Topics in Text Stream. CoRR abs/1203.3463.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Query based event extraction along a timeline",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Leong Chieu",
"suffix": ""
},
{
"first": "Yoong Keok",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Leong Chieu and Yoong Keok Lee. 2004. Query based event extraction along a timeline. ACM, New York, New York, USA, July.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. In Stan Szpakowicz Marie-Francine Moens, editor, Text Summarization Branches Out: Proceedings of the ACL-04 Work- shop, pages 74-81, Barcelona, Spain, July. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Time-dependent Hierarchical Dirichlet Model for Timeline Generation",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.2244"
]
},
"num": null,
"urls": [],
"raw_text": "Tao Wang. 2013. Time-dependent Hierarchical Dirichlet Model for Timeline Generation. arXiv preprint arXiv:1312.2244.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Evolutionary timeline summarization",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jahna",
"middle": [],
"last": "Otterbacher",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2011,
"venue": "the 34th international ACM SIGIR conference",
"volume": "745",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Yan, Xiaojun Wan, Jahna Otterbacher, Liang Kong, Xiaoming Li, and Yan Zhang. 2011b. Evolutionary timeline summarization. In the 34th international ACM SIGIR conference, page 745, New York, New York, USA. ACM Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Number of different articles by entity. A small number of entities are responsible for a large number of articles."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Percentage of very important articles by entity."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Distribution of confidence for three different levels of article importance. Mean-values are indicated by a dashed line."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Percentage of 'very important' articles meeting a given confidence threshold."
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Google search rank for a given article vs. ROUGE score. Each point is an average across entities. The intensity of the point is a measure of confidence grams (n = 1, 2)."
}
}
}
}