ccsum committed on
Commit be2d3c1 · verified · 1 Parent(s): 34ddfa8

Update README.md

Files changed (1): README.md (+133 −84)
---
license: cc-by-nc-4.0
dataset_info:
  features:
  - name: summary
    dtype: string
  - name: cluster_id
    dtype: string
  - name: summary_id
    dtype: string
  - name: article_id
    dtype: string
  - name: summary_title
    dtype: string
  - name: article_title
    dtype: string
  - name: summary_domain
    dtype: string
  - name: article_domain
    dtype: string
  - name: summary_maintext
    dtype: string
  - name: summary_word_count
    dtype: int64
  - name: summary_entity_count
    dtype: int64
  - name: entity_precision_constraint
    dtype: float64
  - name: entity_precision
    dtype: float64
  - name: simhash_distance
    dtype: int64
  - name: quotation_precision
    dtype: float64
  - name: title-title-similarity
    dtype: float32
  - name: summary-title-similarity
    dtype: float32
  - name: BERTScore-P (bert-large-uncased)
    dtype: float32
  - name: BERTScore-R (bert-large-uncased)
    dtype: float32
  - name: BERTScore-F1 (bert-large-uncased)
    dtype: float32
  - name: BERTScore-P (facebook/bart-large)
    dtype: float32
  - name: BERTScore-R (facebook/bart-large)
    dtype: float32
  - name: BERTScore-F1 (facebook/bart-large)
    dtype: float32
  - name: mint
    dtype: float64
  - name: lcsr
    dtype: float64
  - name: date_publish
    dtype: timestamp[us]
  - name: url
    dtype: string
  - name: abstractiveness_bin
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 4181641242
    num_examples: 1349911
  - name: validation
    num_bytes: 32042449
    num_examples: 10000
  - name: test
    num_bytes: 32577217
    num_examples: 10000
  download_size: 1479040048
  dataset_size: 4246260908
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
## Dataset Card for CCSum [summary-only]

We release the summaries, article URLs, and metadata of the CCSum dataset. The full article texts can be downloaded from CC-News.

## Dataset Summary
CCSum is a large-scale, high-quality dataset for abstractive news summarization.
It contains 1.3 million article-summary pairs derived from 35 million news articles in CommonCrawl News.
To create the dataset, we cluster CommonCrawl News articles into news events, generate candidate article-summary pairs from each cluster, and apply strict filtering together with a Bayesian optimization method that eliminates 99% of the candidate summaries.
Human evaluation shows that the resulting dataset has higher quality, in terms of factual consistency, informativeness, and coherence, than established abstractive summarization datasets.
## Load dataset
```python
from datasets import load_dataset

# Load the full dataset (both abstractive and extractive)
dataset = load_dataset("ccsum/CCSum")

# Abstractive subset of the dataset
dataset_abstractive = dataset.filter(lambda x: x["abstractiveness_bin"] == "high")

# Extractive subset of the dataset
dataset_extractive = dataset.filter(lambda x: x["abstractiveness_bin"] == "low")
```
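Beyond `abstractiveness_bin`, the metadata carries several quality columns (e.g. `entity_precision`, `quotation_precision`) that can be used for stricter filtering in the same way. A minimal sketch over toy records; the 0.9 threshold is an illustrative choice, not a value from the dataset card:

```python
# Toy records mimicking CCSum metadata columns; real rows come from load_dataset.
records = [
    {"summary": "A", "entity_precision": 0.95, "quotation_precision": 1.0},
    {"summary": "B", "entity_precision": 0.60, "quotation_precision": 1.0},
    {"summary": "C", "entity_precision": 0.92, "quotation_precision": 0.5},
]

# Keep only summaries whose named entities and quotations are well supported
# by the source article (threshold chosen for illustration only).
def is_high_precision(row, threshold=0.9):
    return (row["entity_precision"] >= threshold
            and row["quotation_precision"] >= threshold)

filtered = [r for r in records if is_high_precision(r)]
print([r["summary"] for r in filtered])  # → ['A']
```

With the real dataset, the same predicate can be passed to `dataset.filter(...)` exactly like the `abstractiveness_bin` examples above.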
107
+
108
+ ## Language
109
+ CCSum currently only supports English.
110
+
111
+ ## Main Data Fields
112
+
113
+ - `id`: a string that corresponds to the sha256 hash of the article and summary
114
+ - `article`: a string containing the body of the news article from CCNews
115
+ - `summary`: a string containing a summary for the article
116
+ - `abstractiveness_bin`: a string indicating if the abstractiveness level of the summary. `high` denotes the abstractive subset and `low` denotes the extractive subset.
117
+
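The card states that `id` is a SHA-256 hash of the article and summary, but it does not specify how the two strings are combined before hashing; the plain concatenation below is an assumption for illustration only:

```python
import hashlib

# Hypothetical reconstruction: the dataset card only says the id is a SHA-256
# hash of the article and summary, not how the two strings are combined.
def example_id(article: str, summary: str) -> str:
    return hashlib.sha256((article + summary).encode("utf-8")).hexdigest()

eid = example_id("Some article body.", "A short summary.")
print(len(eid))  # SHA-256 hex digests are always 64 characters → 64
```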
### Data Splits

The CCSum dataset has 3 splits: _train_, _validation_, and _test_. The statistics for each split are given below.

| Split | Total     | Date range       | Extractive | Abstractive |
|-------|-----------|------------------|------------|-------------|
| Train | 1,349,911 | 1/2018 - 12/2021 | 674,939    | 674,972     |
| Val.  | 10,000    | 1/2022 - 5/2022  | 4,853      | 5,147       |
| Test  | 10,000    | 6/2022 - 12/2022 | 5,053      | 4,947       |

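The extractive/abstractive columns in the table correspond to counting the `abstractiveness_bin` value within each split. A minimal sketch with toy rows; on the real data, `dataset["train"]["abstractiveness_bin"]` would supply the column:

```python
from collections import Counter

# Toy rows standing in for one split; each real row carries an
# abstractiveness_bin of "high" (abstractive) or "low" (extractive).
split_rows = [
    {"abstractiveness_bin": "high"},
    {"abstractiveness_bin": "low"},
    {"abstractiveness_bin": "high"},
]

counts = Counter(row["abstractiveness_bin"] for row in split_rows)
print(counts["high"], counts["low"])  # → 2 1
```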
## Dataset Creation
The dataset is created from CommonCrawl News. Please refer to our paper for more details: "CCSum: A Large-Scale and High-Quality Dataset for Abstractive News Summarization" (NAACL 2024).

### Licensing Information

The CCSum dataset is released under the CC BY-NC 4.0 license.