---
license: cc-by-nc-4.0
dataset_info:
  features:
  - name: summary
    dtype: string
  - name: url
    dtype: string
  - name: date_publish
    dtype: timestamp[us]
  - name: article_title
    dtype: string
  - name: id
    dtype: string
  - name: article_domain
    dtype: string
  - name: abstractiveness_bin
    dtype: string
  - name: cluster_id
    dtype: string
  - name: summary_id
    dtype: string
  - name: article_id
    dtype: string
  - name: summary_domain
    dtype: string
  - name: summary_word_count
    dtype: int64
  - name: summary_entity_count
    dtype: int64
  - name: entity_precision_constraint
    dtype: float64
  - name: entity_precision
    dtype: float64
  - name: simhash_distance
    dtype: int64
  - name: quotation_precision
    dtype: float64
  - name: title-title-similarity
    dtype: float32
  - name: summary-title-similarity
    dtype: float32
  - name: BERTScore-P (bert-large-uncased)
    dtype: float32
  - name: BERTScore-R (bert-large-uncased)
    dtype: float32
  - name: BERTScore-F1 (bert-large-uncased)
    dtype: float32
  - name: BERTScore-P (facebook/bart-large)
    dtype: float32
  - name: BERTScore-R (facebook/bart-large)
    dtype: float32
  - name: BERTScore-F1 (facebook/bart-large)
    dtype: float32
  - name: mint
    dtype: float64
  - name: lcsr
    dtype: float64
  splits:
  - name: train
    num_bytes: 1050376245
    num_examples: 1349911
  - name: validation
    num_bytes: 7785024
    num_examples: 10000
  - name: test
    num_bytes: 7798236
    num_examples: 10000
  download_size: 521533439
  dataset_size: 1065959505
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
## Dataset Card for CCSum [summary-only]

We release the metadata associated with the CCSum dataset: the article URL, article title, summary (median length: 30 words), publication date, an id derived from sha2(maintext, 256), and other associated fields.
Please download the articles from the provided URLs, and reach out to us if you encounter any issues using the dataset.
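
Since this release is summary-only, the article bodies have to be fetched from the `url` field. Below is a minimal sketch of one way to do that, assuming the `news-please` package for article extraction; the package choice and the hash cross-check are illustrative, not part of the official pipeline.

```python
# Illustrative sketch: fetch one article body and compare its sha256 digest to
# the released `id`. Assumes `pip install news-please datasets`; the exact
# encoding of `id` is described in the paper, so treat the comparison only as a
# rough sanity check.
import hashlib

from datasets import load_dataset
from newsplease import NewsPlease

dataset = load_dataset("ccsum/CCSum", split="validation")
example = dataset[0]

article = NewsPlease.from_url(example["url"])  # may fail for dead links
if article is not None and article.maintext:
    digest = hashlib.sha256(article.maintext.encode("utf-8")).hexdigest()
    print("summary:", example["summary"])
    print("id:     ", example["id"])
    print("digest: ", digest)
```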

## Dataset Summary
CCSum is a large-scale and high-quality dataset for abstractive news summarization.
It contains 1.3 million pairs of articles and reference summaries derived from 35 million news articles from CommonCrawl News.
In creating this dataset, we cluster CommonCrawl News articles into news events from which we generate candidate article-summary pairs and apply strict filtering and a Bayesian optimization method that eliminates 99% of the candidate summaries.
Human evaluation shows that the proposed dataset has higher quality (in terms of factual consistency, informativeness, and coherence) than established abstractive summarization datasets.

## Load dataset
```python
from datasets import load_dataset
# Load the full dataset (both abstractive and extractive)
dataset = load_dataset("ccsum/CCSum")

# abstractive subset of the dataset
dataset_abstractive = dataset.filter(lambda x: x["abstractiveness_bin"] == "high")

# extractive subset of the dataset
dataset_extractive = dataset.filter(lambda x: x["abstractiveness_bin"] == "low")
```
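
If you only want to inspect a handful of examples without downloading the full parquet shards (roughly 520 MB), the standard streaming mode of `datasets` should also work here (a small sketch):

```python
from datasets import load_dataset

# Stream the validation split instead of materializing it on disk.
stream = load_dataset("ccsum/CCSum", split="validation", streaming=True)

for example in stream.take(3):
    print(example["abstractiveness_bin"], "-", example["summary"][:80])
```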

## Language
CCSum currently contains English articles and summaries only.

## Main Data Fields

- `id`: a string that corresponds to the sha256 hash of the article and summary
- `article`: the body of the news article from CC-News (not included in this summary-only release; it can be reconstructed by downloading the article at `url`)
- `summary`: a string containing a summary for the article
- `abstractiveness_bin`: a string indicating the abstractiveness level of the summary; `high` denotes the abstractive subset and `low` denotes the extractive subset.
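
The release also carries the per-summary quality columns listed in the header above (e.g. `summary_word_count`, `entity_precision`, `quotation_precision`), which can be used for further filtering. A small sketch follows; the thresholds are arbitrary examples, not recommendations from the paper.

```python
from datasets import load_dataset

dataset = load_dataset("ccsum/CCSum", split="train")

# Keep short summaries whose entities and quotations are fully supported by the
# source article, according to the released metadata columns.
high_precision = dataset.filter(
    lambda x: x["summary_word_count"] <= 30
    and x["entity_precision"] == 1.0
    and x["quotation_precision"] == 1.0
)
print(high_precision.num_rows)
```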

### Data Splits

The CCSum dataset has three splits: _train_, _validation_, and _test_. The statistics for each split are shown below.

| Split | Total     | Date range       | Extractive | Abstractive |
|-------|-----------|------------------|------------|------------|
| Train | 1,349,911 | 1/2018 - 12/2021 |    674,939 |    674,972 |
| Val.  |    10,000 |  1/2022 - 5/2022 |      4,853 |      5,147 |
| Test  |    10,000 | 6/2022 - 12/2022 |      5,053 |      4,947 |
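
The extractive/abstractive breakdown in the table can be reproduced from the `abstractiveness_bin` column (a quick sketch):

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("ccsum/CCSum")

# Count extractive ("low") vs. abstractive ("high") summaries per split.
for split in ("train", "validation", "test"):
    print(split, Counter(dataset[split]["abstractiveness_bin"]))
```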

## Dataset Creation
The dataset is created from CommonCrawl News. Please refer to our paper for more details: "CCSum: A Large-Scale and High-Quality Dataset for Abstractive News Summarization (NAACL 2024)."

### Licensing Information

The CCSum dataset is released under the cc-by-nc-4.0 license.