---
tags:
- pretraining
- web
---
# Common Crawl WET Dataset - c2

This repository contains a filtered dataset derived from the WET files of the Common Crawl project. The data is cleaned and aggregated to support large-scale natural language processing tasks, especially the pretraining of large language models (LLMs).

## Dataset Description

- **Source:** Common Crawl CC-MAIN-2025-38, September 2025 crawl.
- **Data Type:** Extracted plaintext from web crawl WET files with aggressive metadata and boilerplate filtering.
- **File Size:** Large combined files (~15GB each) to balance upload size and storage constraints.
- **Preprocessing:** Streamed extraction, metadata removal, and filtering of boilerplate and duplicate content (see the sketch after this list).
- **Purpose:** Primarily designed for pretraining foundation models and LLMs requiring diverse, massive-scale natural language corpora.
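
A minimal sketch of the streamed WET extraction and filtering described above, assuming the `warcio` library is used to read the archives; the length threshold and exact-hash deduplication shown here are illustrative placeholders, not the published pipeline.

```python
import hashlib

from warcio.archiveiterator import ArchiveIterator

def iter_clean_texts(wet_path, min_chars=200):
    """Stream plaintext records from a WET file, dropping short and duplicate texts."""
    seen_hashes = set()  # simple exact-duplicate filter (illustrative only)
    with open(wet_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "conversion":  # skip warcinfo/metadata records
                continue
            text = record.content_stream().read().decode("utf-8", errors="ignore")
            if len(text) < min_chars:  # crude boilerplate / short-page filter
                continue
            digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
            if digest in seen_hashes:
                continue
            seen_hashes.add(digest)
            yield record.rec_headers.get_header("WARC-Target-URI"), text

# Example (the path is a placeholder):
# for url, text in iter_clean_texts("CC-MAIN-2025-38-example.warc.wet.gz"):
#     ...
```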

## Features

- **Optimized for Pretraining:**  
  The dataset is curated and filtered to be suitable for training large language models. It contains clean, high-quality textual data ideal for unsupervised pretraining tasks like masked language modeling or autoregressive modeling.

- **Large Scale:**  
  Contains processed data amounting to multiple terabytes, allowing training on a broad, diverse text corpus representing a wide range of domains.

- **Streaming Processing:**  
  The data was processed in a memory-efficient, streaming manner to support large-scale data handling without requiring excessive resources.

- **Metadata Cleaning:**  
  Extensive removal of WARC headers, HTTP headers, and other metadata keeps noise in the training text to a minimum.

- **Resume and Verify:**  
  Processing is checkpointed for fault tolerance. Uploaded files are verified on Hugging Face to avoid duplicates.

- **Immediate Uploads:**  
  Each combined file is uploaded to Hugging Face as soon as it reaches the ~15GB size limit, keeping local storage requirements bounded (see the sketch after this list).
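
A rough sketch of the checkpointed upload flow described in the last two items, assuming the `huggingface_hub` client; the helper name, shard-size constant, and file handling are illustrative assumptions, not the exact upload script.

```python
import os

from huggingface_hub import HfApi

REPO_ID = "blue-blue/c2"
SHARD_LIMIT_BYTES = 15 * 1024**3  # ~15GB per combined file (assumption)

api = HfApi()

def upload_if_new(local_path: str) -> None:
    """Upload a finished shard unless it already exists in the dataset repo."""
    existing = set(api.list_repo_files(REPO_ID, repo_type="dataset"))
    name = os.path.basename(local_path)
    if name in existing:
        return  # already uploaded; skip to avoid duplicates
    api.upload_file(
        path_or_fileobj=local_path,
        path_in_repo=name,
        repo_id=REPO_ID,
        repo_type="dataset",
    )
    os.remove(local_path)  # free local storage once the shard is safely uploaded
```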


## 💻 Usage

Load the dataset easily using the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("blue-blue/c2")

# Example: Access the first sample
print(dataset["train"][0])
```

After loading, you can iterate over text samples for pretraining models like GPT, BERT, or other large language architectures.
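
Because the corpus spans multiple terabytes, streaming mode is often more practical than a full download. A short sketch; the field names of each sample are not documented here, so inspect the first record before wiring this into a training loop.

```python
from itertools import islice

from datasets import load_dataset

# streaming=True iterates over shards without downloading the whole dataset first
stream = load_dataset("blue-blue/c2", split="train", streaming=True)

for sample in islice(stream, 3):
    # Print the raw sample dict to check the schema (field names are not assumed here)
    print(sample)
```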

## Pretraining Applications

- **Foundation Model Development:**  
  Provides diverse, large-scale text data crucial for training high-quality foundation LLMs.

- **Language Modeling Tasks:**  
  Suitable for autoregressive or masked language model pretraining due to extensive scale and quality.

- **Downstream Adaptation:**  
  Can be combined with other specialized datasets for fine-tuning or adaptation tasks.

- **Research & Benchmarking:**  
  Acts as a standard large-scale corpus for benchmarking NLP algorithms and analyzing language model behavior.

## Contact

For questions, support, or collaboration:

[hello@bluesminds.com](mailto:hello@bluesminds.com)

---

Thank you for exploring the **c2** dataset — a foundational resource for large-scale language modeling and NLP research.

## ⚠️ Note

This dataset is in **update mode** — it is **continuously expanding and improving** as new Common Crawl snapshots are processed and added.  
Expect regular additions, refinements, and enhanced cleaning over time.