
This dataset is the result of processing all WARC files in the CCNews Corpus, from its beginning in 2016 through June 2024. The data has been cleaned and deduplicated, and the language of each article has been detected and recorded. The process is similar to Hugging Face's DataTrove pipeline.

Overall, it contains about 600 million news articles in more than 100 languages from all around the globe.

For license information, please refer to CommonCrawl's Terms of Use.

Sample Python code to explore this dataset:

from datasets import load_dataset
from tqdm import tqdm

# Load the news articles **crawled** in the year 2016 (but not necessarily published in 2016), in streaming mode
dataset = load_dataset("stanford-oval/ccnews", name="2016", streaming=True) # `name` can be one of 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024

# Print information about the dataset
print(dataset)

# Iterate over a few examples
print("\nFirst few examples:")
for i, example in enumerate(dataset["train"].take(5)):
    print(f"Example {i + 1}:")
    print(example)
    print()

# Count the number of articles (in 2016)
row_count = 0
for _ in tqdm(dataset["train"], desc="Counting rows", unit=" rows", unit_scale=True, unit_divisor=1000):
    row_count += 1

# Print the number of rows
print(f"\nTotal number of articles: {row_count}")

# Extract all Arabic ("ar") articles
arabic_articles = []
for row in tqdm(dataset["train"], desc="Extracting articles", unit=" rows", unit_scale=True, unit_divisor=1000):
    if row["language"] == "ar":
        arabic_articles.append(row)
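Instead of a manual loop, the same language filtering can be expressed with the `filter` method that `datasets` provides for iterable (streaming) datasets. The sketch below avoids a network download by building a tiny in-memory sample with the same `language` field and converting it to an `IterableDataset`; the `plain_text` column name here is only an illustrative assumption, not necessarily this dataset's schema.

from datasets import Dataset

# Tiny in-memory sample mimicking the streaming dataset's rows
# (the "plain_text" column name is a hypothetical placeholder).
sample = Dataset.from_dict({
    "plain_text": ["مثال", "example", "ejemplo"],
    "language": ["ar", "en", "es"],
})

# Convert to an IterableDataset to mirror streaming mode, then filter lazily;
# rows are only evaluated as you iterate, so nothing is loaded up front.
arabic = sample.to_iterable_dataset().filter(lambda row: row["language"] == "ar")

for row in arabic:
    print(row["language"])

On the real dataset, the equivalent call would be `dataset["train"].filter(lambda row: row["language"] == "ar")`, which keeps the streaming behavior of `load_dataset(..., streaming=True)`.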