---
license: cc-by-nc-4.0
dataset_info:
  features:
  - name: summary
    dtype: string
  - name: url
    dtype: string
  - name: date_publish
    dtype: timestamp[us]
  - name: article_title
    dtype: string
  - name: id
    dtype: string
  - name: article_domain
    dtype: string
  - name: abstractiveness_bin
    dtype: string
  - name: cluster_id
    dtype: string
  - name: summary_id
    dtype: string
  - name: article_id
    dtype: string
  - name: summary_domain
    dtype: string
  - name: summary_word_count
    dtype: int64
  - name: summary_entity_count
    dtype: int64
  - name: entity_precision_constraint
    dtype: float64
  - name: entity_precision
    dtype: float64
  - name: simhash_distance
    dtype: int64
  - name: quotation_precision
    dtype: float64
  - name: title-title-similarity
    dtype: float32
  - name: summary-title-similarity
    dtype: float32
  - name: BERTScore-P (bert-large-uncased)
    dtype: float32
  - name: BERTScore-R (bert-large-uncased)
    dtype: float32
  - name: BERTScore-F1 (bert-large-uncased)
    dtype: float32
  - name: BERTScore-P (facebook/bart-large)
    dtype: float32
  - name: BERTScore-R (facebook/bart-large)
    dtype: float32
  - name: BERTScore-F1 (facebook/bart-large)
    dtype: float32
  - name: mint
    dtype: float64
  - name: lcsr
    dtype: float64
  splits:
  - name: train
    num_bytes: 1050376245
    num_examples: 1349911
  - name: validation
    num_bytes: 7785024
    num_examples: 10000
  - name: test
    num_bytes: 7798236
    num_examples: 10000
  download_size: 521533439
  dataset_size: 1065959505
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Dataset Card for CCSum [summary-only]

We release the metadata for the CCSum dataset, containing the article URL, title, summary (median length: 30 words), publication date, an `id` derived from sha2(maintext, 256), and other associated metadata. Please download the article bodies from the URLs, and reach out to us if you encounter any issues using the dataset.
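Since each `id` is stated to be derived from sha2(maintext, 256), ids can in principle be recomputed locally to sanity-check downloaded article bodies. A minimal sketch; it assumes the article text is hashed as UTF-8 with no extra normalization, which may differ from the dataset's exact preprocessing:

```python
import hashlib

def article_id(maintext: str) -> str:
    """Hypothetical reconstruction of the id field: sha2(maintext, 256).

    Assumes the raw article body is encoded as UTF-8 and hashed directly;
    the dataset's actual text normalization is not documented here.
    """
    return hashlib.sha256(maintext.encode("utf-8")).hexdigest()

# Compare article_id(downloaded_text) against the dataset's id column
# to flag articles whose content has changed since the crawl.
```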
## Dataset Summary

CCSum is a large-scale, high-quality dataset for abstractive news summarization. It contains 1.3 million article-summary pairs derived from 35 million news articles in CommonCrawl News. To create the dataset, we cluster CommonCrawl News articles into news events, generate candidate article-summary pairs from each cluster, and apply strict filtering together with a Bayesian optimization method that eliminates 99% of the candidate summaries. Human evaluation shows the proposed dataset has higher quality, in terms of factual consistency, informativeness, and coherence, than established abstractive summarization datasets.
## Load dataset

```python
from datasets import load_dataset

# Load the full dataset (both abstractive and extractive)
dataset = load_dataset("ccsum/CCSum")

# Abstractive subset of the dataset
dataset_abstractive = dataset.filter(lambda x: x["abstractiveness_bin"] == "high")

# Extractive subset of the dataset
dataset_extractive = dataset.filter(lambda x: x["abstractiveness_bin"] == "low")
```
## Language
CCSum currently only supports English.
## Main Data Fields

- `id`: a string corresponding to the SHA-256 hash of the article and summary
- `article`: a string containing the body of the news article from CCNews
- `summary`: a string containing a summary of the article
- `abstractiveness_bin`: a string indicating the abstractiveness level of the summary; `high` denotes the abstractive subset and `low` denotes the extractive subset
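For illustration, the abstractive/extractive split on `abstractiveness_bin` can be reproduced on plain dicts. The sample records below are hypothetical stand-ins; real rows come from `load_dataset("ccsum/CCSum")` and carry many more metadata columns:

```python
# Hypothetical records mirroring the main fields of a CCSum row
records = [
    {"id": "a" * 64, "summary": "A short abstractive summary.",
     "abstractiveness_bin": "high"},
    {"id": "b" * 64, "summary": "A mostly copied extractive summary.",
     "abstractiveness_bin": "low"},
]

# Same predicates as used with datasets.Dataset.filter above
abstractive = [r for r in records if r["abstractiveness_bin"] == "high"]
extractive = [r for r in records if r["abstractiveness_bin"] == "low"]
```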
## Data Splits

The CCSum dataset has three splits: train, validation, and test. Statistics for each split are shown below.
| Split | Total | Date range | Extractive | Abstractive |
|---|---|---|---|---|
| Train | 1,349,911 | 1/2018 - 12/2021 | 674,939 | 674,972 |
| Val. | 10,000 | 1/2022 - 5/2022 | 4,853 | 5,147 |
| Test | 10,000 | 6/2022 - 12/2022 | 5,053 | 4,947 |
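As a quick sanity check on the table above, the extractive and abstractive counts sum to each split's total:

```python
# (total, extractive, abstractive) per split, copied from the table above
splits = {
    "train": (1_349_911, 674_939, 674_972),
    "validation": (10_000, 4_853, 5_147),
    "test": (10_000, 5_053, 4_947),
}

for name, (total, extractive, abstractive) in splits.items():
    assert extractive + abstractive == total, name
```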
## Dataset Creation
The dataset is created from CommonCrawl News. Please refer to our paper for more details: "CCSum: A Large-Scale and High-Quality Dataset for Abstractive News Summarization (NAACL 2024)."
## Licensing Information

The CCSum dataset is released under the cc-by-nc-4.0 license.