# Common Crawl WET Dataset - c2

This repository contains a large-scale filtered dataset derived from the WET files of the Common Crawl project. The data is cleaned and aggregated to support large-scale natural language processing tasks, especially the pretraining of large language models (LLMs).

## Dataset Description

- **Source:** Common Crawl CC-MAIN-2025-38 (September 2025 crawl).
- **Data Type:** Plaintext extracted from web crawl WET files, with aggressive metadata and boilerplate filtering.
- **File Size:** Large combined shards (~15 GB each) to balance upload size and storage constraints.
- **Preprocessing:** Streamed extraction, metadata removal, and filtering of boilerplate and duplicate content (an illustrative deduplication sketch follows this list).
- **Purpose:** Primarily designed for pretraining foundation models and LLMs that require diverse, massive-scale natural language corpora.
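
The exact filtering code behind c2 is not published here, but the duplicate-content removal mentioned under **Preprocessing** can be pictured as a simple hash-based filter. This is an illustrative sketch only, not the pipeline actually used to build the dataset:

```python
import hashlib

def drop_exact_duplicates(paragraphs, seen=None):
    """Yield paragraphs whose normalized content has not appeared before.

    Illustrative only: the real c2 pipeline may use different normalization
    and deduplication criteria.
    """
    seen = set() if seen is None else seen
    for paragraph in paragraphs:
        normalized = " ".join(paragraph.split()).lower()
        if not normalized:
            continue  # skip empty or whitespace-only paragraphs
        key = hashlib.sha1(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            yield paragraph
```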

## Features

- **Optimized for Pretraining:**
  The dataset is curated and filtered to be suitable for training large language models. It contains clean, high-quality text well suited to unsupervised pretraining objectives such as masked or autoregressive language modeling.

- **Large Scale:**
  Contains multiple terabytes of processed data, enabling training on a broad, diverse corpus that spans a wide range of domains.

- **Streaming Processing:**
  The data was processed in a memory-efficient, streaming manner so that large volumes can be handled without excessive resources (a sketch of this style of WET processing follows this list).

- **Metadata Cleaning:**
  Extensive removal of WARC headers, HTTP headers, and other metadata keeps noise in the training text to a minimum.

- **Resume and Verify:**
  Processing is checkpointed for fault tolerance, and uploaded files are verified on Hugging Face to avoid duplicates.

- **Immediate Uploads:**
  Files are uploaded to Hugging Face as soon as they reach the ~15 GB size limit, keeping local storage requirements low.
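
The streaming extraction and metadata cleaning described above can be approximated with the `warcio` library, which iterates over WET records without loading a whole file into memory. This is a minimal sketch under assumed file names, not the actual c2 pipeline:

```python
from warcio.archiveiterator import ArchiveIterator

def iter_wet_texts(path):
    """Yield (url, text) pairs from a WET file, skipping WARC/HTTP metadata."""
    with open(path, "rb") as stream:
        for record in ArchiveIterator(stream):
            # WET files store the extracted plaintext in "conversion" records.
            if record.rec_type != "conversion":
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            text = record.content_stream().read().decode("utf-8", errors="replace")
            yield url, text

# Example (hypothetical shard name): count non-empty documents in one WET file.
# count = sum(1 for _, t in iter_wet_texts("example.warc.wet.gz") if t.strip())
```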

## Usage

Load the dataset with Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("blue-blue/c2")
```

After loading, you can iterate over text samples for pretraining models like GPT, BERT, or other large language architectures.
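
Because the corpus spans multiple terabytes, streaming access is often more practical than a full download. A minimal sketch, assuming a default `train` split (column names may differ from what is shown):

```python
from datasets import load_dataset

# Stream records instead of downloading the entire multi-terabyte corpus.
dataset = load_dataset("blue-blue/c2", split="train", streaming=True)

# Inspect a few samples; the exact fields depend on how the shards were written.
for sample in dataset.take(5):
    print(sample)
```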

## Pretraining Applications

- **Foundation Model Development:**
  Provides diverse, large-scale text data crucial for training high-quality foundation LLMs.

- **Language Modeling Tasks:**
  Suitable for autoregressive or masked language model pretraining thanks to its scale and quality (see the tokenization sketch after this list).

- **Downstream Adaptation:**
  Can be combined with other specialized datasets for fine-tuning or adaptation tasks.

- **Research & Benchmarking:**
  Serves as a large-scale corpus for benchmarking NLP methods and analyzing language model behavior.
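
As a rough illustration of the language-modeling use case above, the snippet below tokenizes streamed samples for a causal (autoregressive) model. It is a sketch only: the `text` column name and the GPT-2 tokenizer are assumptions, not part of this dataset's specification.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: each record exposes a "text" field; adjust to the real column name.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
stream = load_dataset("blue-blue/c2", split="train", streaming=True)

def tokenize(batch):
    # Truncation keeps the sketch short; real pretraining pipelines usually
    # concatenate documents and chunk them into fixed-length blocks instead.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = stream.map(tokenize, batched=True)
```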

## Contact

For questions, support, or collaboration:

[hello@bluesminds.com](mailto:hello@bluesminds.com)

---

Thank you for exploring the **c2** dataset, a foundational resource for large-scale language modeling and NLP research.