---
license: apache-2.0
language:
- en
task_categories:
- text-generation
tags:
- llm
- pretraining
- web
- data-selection
size_categories:
- n>1T
---
# FineWeb-Mask

[📄 DATAMASK Paper](https://arxiv.org/abs/2512.24265) | [💻 GitHub Repository](https://github.com/ByteDance-Seed/DATAMASK) | [📦 FineWeb-Mask Dataset](https://huggingface.co/datasets/DATA-MASK/FineWeb-Mask)

## 📖 Introduction

**FineWeb-Mask** is a 1.5-trillion-token, high-efficiency pre-training dataset curated with the **DATAMASK** framework. Developed by the **ByteDance Seed team**, DATAMASK addresses a fundamental tension in large-scale data selection: the trade-off between **high quality** and **high diversity**.

By modeling data selection as a **mask learning** problem, we provide a derivative of the original [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) corpus. FineWeb-Mask is designed to eliminate semantic redundancy while preserving the highest-quality samples, allowing models to achieve superior performance with significantly less data.
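
As a quick start, the dataset can be streamed with the Hugging Face `datasets` library. Below is a minimal sketch; the split name `train` and the text column `text` are assumptions based on common FineWeb-style layouts, so check the dataset files on the Hub for the exact schema.

```python
from datasets import load_dataset

# Stream rather than download: at ~1.5T tokens, a full local copy is
# impractical for most setups.
ds = load_dataset("DATA-MASK/FineWeb-Mask", split="train", streaming=True)

# Peek at a few documents (the column name "text" is an assumption).
for example in ds.take(3):
    print(example["text"][:200])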

## 🎯 The Problem: The Quality-Diversity Trap

In large language model (LLM) pre-training, developers typically run into three pitfalls:

1. **The Quality Trap:** Filtering solely by quality scores yields diminishing returns: the selected samples become highly clustered, resulting in severe semantic redundancy.
2. **The Diversity Trap:** Filtering solely for diversity often discards high-value, high-quality samples and can lead to worse performance than the raw dataset.
3. **The Compute Bottleneck:** Traditional diversity algorithms (such as greedy selection) are computationally prohibitive for trillion-token datasets (see the sketch after this list).
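
To make the bottleneck in item 3 concrete, here is a toy version of the classic greedy max-min (farthest-point) diversity selector that DATAMASK is positioned against. This is the generic baseline algorithm, not DATAMASK itself: each pick rescans every remaining candidate, so choosing k of n samples costs O(k·n) distance updates, which is intractable when n is on the order of billions of documents.

```python
import numpy as np

def greedy_maxmin_select(embeddings: np.ndarray, k: int) -> list[int]:
    """Farthest-point greedy selection: repeatedly take the sample that is
    farthest from everything selected so far. Costs O(k * n) distance updates."""
    selected = [0]  # seed with an arbitrary first point
    # Minimum distance from every candidate to the current selected set.
    min_dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(min_dist))  # the most "novel" remaining sample
        selected.append(nxt)
        dist_to_new = np.linalg.norm(embeddings - embeddings[nxt], axis=1)
        min_dist = np.minimum(min_dist, dist_to_new)
    return selected

# A toy run: fine at 10k candidates, hopeless at trillion-token scale.
emb = np.random.default_rng(0).standard_normal((10_000, 64)).astype(np.float32)
picks = greedy_maxmin_select(emb, k=100)
```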

## 💡 Highlights: The DATAMASK Framework

DATAMASK breaks this deadlock through a "joint harvesting" strategy:

* **Joint Optimization:** Uses policy-gradient algorithms to optimize quality and diversity metrics jointly within a unified framework (see the sketch after this list).
* **Extreme Acceleration:** Through probability relaxation and specialized optimization techniques, DATAMASK reduces computation time by **98.9%** compared to traditional greedy algorithms, making trillion-token selection feasible.
* **The "Balancer":** A tunable parameter lets developers set their own "golden ratio" between quality and diversity.
* **Semantic De-redundancy:** Visual analysis shows that FineWeb-Mask samples are distributed evenly across high-quality regions rather than being rigidly clustered.
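
To illustrate the technique class these bullets describe, here is a toy sketch of policy-gradient mask learning: each sample gets a learnable Bernoulli keep-probability, and a REINFORCE-style update pushes the mask toward subsets whose reward mixes quality and diversity, with `alpha` playing the role of the balancer. Everything here (the reward shape, the budget penalty, all hyperparameters) is an illustrative assumption, not ByteDance's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
quality = rng.random(n)                      # toy per-sample quality scores in [0, 1]
emb = rng.standard_normal((n, 16))           # toy semantic embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

logits = np.zeros(n)                         # learnable per-sample mask parameters
alpha, lr, budget = 0.5, 5.0, 0.10           # quality/diversity mix, step size, keep ratio

def reward(mask: np.ndarray) -> float:
    """alpha * mean quality + (1 - alpha) * (1 - mean pairwise similarity)."""
    kept = mask.astype(bool)
    k = int(kept.sum())
    if k < 2:
        return 0.0
    q = quality[kept].mean()
    sims = emb[kept] @ emb[kept].T               # pairwise cosine similarities
    redundancy = (sims.sum() - k) / (k * k - k)  # mean off-diagonal similarity
    return alpha * q + (1 - alpha) * (1.0 - redundancy)

baseline = 0.0
for step in range(300):
    p = 1.0 / (1.0 + np.exp(-logits))             # relaxed keep-probabilities
    mask = (rng.random(n) < p).astype(float)      # sample a hard 0/1 mask
    r = reward(mask) - abs(mask.mean() - budget)  # penalize budget violations
    baseline = 0.9 * baseline + 0.1 * r           # moving-average variance reducer
    # REINFORCE: grad of log Bernoulli(mask; p) w.r.t. logits is (mask - p).
    logits += lr * (r - baseline) * (mask - p)

keep = np.argsort(-logits)[: int(budget * n)]  # final top-p selection
```

At real trillion-token scale, the explicit pairwise-similarity term is exactly what probability relaxation and the acceleration techniques above would need to avoid computing directly; this toy keeps it explicit for readability.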

## 📊 Evaluation Results

FineWeb-Mask demonstrates that **1 + 1 > 2**: by selecting a subset of only ~10% of the original scale in our experiments, we observed:

* **Dense Models:** A **3.2% average improvement** across 12 benchmarks for 1.5B-parameter dense models.
* **MoE Models:** A **1.9% average improvement** for 7B-parameter Mixture-of-Experts (MoE) models.
* **Length Bias Correction:** Quality filters favor long texts and diversity filters favor short ones; DATAMASK finds a principled middle ground.

| Model Size | Dataset | Avg. Improvement (12 Benchmarks) |
| --- | --- | --- |
| 1.5B Dense | FineWeb (Original) | Baseline |
| 1.5B Dense | **FineWeb-Mask** | **+3.2%** 🚀 |
| 7B MoE | FineWeb (Original) | Baseline |
| 7B MoE | **FineWeb-Mask** | **+1.9%** 🚀 |

## ❤️ Acknowledgements

FineWeb-Mask is built upon the incredible foundational work of the [HuggingFace FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) team. We are grateful to the open-source community for providing the raw corpora that made this optimization possible.

## 📝 Citation

If you find our dataset or the DATAMASK framework useful, please cite our work:

```bibtex
@misc{fan2025jointselectionlargescalepretraining,
  title={Joint Selection for Large-Scale Pre-Training Data via Policy Gradient-based Mask Learning},
  author={Ziqing Fan and Yuqiao Xian and Yan Sun and Li Shen},
  year={2025},
  eprint={2512.24265},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.24265},
}
```

## 📜 License

This dataset is released under the **Apache 2.0** license. Users should also adhere to the original license terms of the FineWeb dataset and its constituent sources.
|