Update README.md
The data was sourced from 105 [CommonCrawl](https://commoncrawl.org/) snapshots, spanning the _summer of 2013 to February 2025_, as well as refetched from the internet, and processed using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/), our large-scale data processing library. This carefully deduplicated and filtered dataset comprises roughly **3.65 terabytes** of text, or about 3T tokens. For PII handling and opt-out, see [_Personal and Sensitive Information and opt-out_](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#personal-and-sensitive-information-and-opt-out).

As is tradition, the dataset is fully reproducible and released under the **ODC-By 1.0 license**.

- ⚒️ [Reproduction code](https://github.com/huggingface/finepdfs)
- 📚 [Blogpost](https://huggingface.co/spaces/HuggingFaceFW/FinePDFsBlog)

## Languages and available subsets

Each language is identified by its [ISO 639-3 code](https://iso639-3.sil.org/code_tables/639/data), and the data is grouped by language-script pairs, since some languages have content in multiple scripts.
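For instance, a subset name joins the three-letter ISO 639-3 language code and the four-letter ISO 15924 script code with an underscore, yielding identifiers like `eng_Latn`. A minimal sketch of this naming convention (the helper below is illustrative, not part of the dataset tooling):

```python
# Illustrative helper: compose a language-script subset identifier
# (ISO 639-3 language code + "_" + ISO 15924 script code).
# This sketch is ours, for clarity only; it is not official tooling.

def subset_name(lang_iso639_3: str, script_iso15924: str) -> str:
    """Compose a subset identifier such as 'eng_Latn' or 'srp_Cyrl'."""
    if len(lang_iso639_3) != 3 or not lang_iso639_3.islower():
        raise ValueError("expected a lowercase ISO 639-3 code, e.g. 'eng'")
    if len(script_iso15924) != 4 or not script_iso15924.istitle():
        raise ValueError("expected a title-case ISO 15924 code, e.g. 'Latn'")
    return f"{lang_iso639_3}_{script_iso15924}"

print(subset_name("eng", "Latn"))  # eng_Latn
print(subset_name("srp", "Cyrl"))  # srp_Cyrl
```

Languages written in several scripts (e.g. Serbian in Latin and Cyrillic) therefore appear as separate subsets, one per script.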

### PII Anonymization 🎭

Emails and IP addresses are anonymized; everything else is kept unchanged. ✉️
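As a rough illustration of what such anonymization can look like, here is a regex-based sketch. This is our simplified example, not the exact pipeline — the actual rules (implemented in `datatrove`) and the replacement tokens may differ:

```python
import re

# Illustrative sketch of email/IPv4 anonymization. The patterns and
# placeholder tokens here are assumptions for demonstration; the real
# pipeline's implementation in datatrove may differ.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def anonymize(text: str) -> str:
    """Replace emails and IPv4 addresses with placeholder tokens."""
    text = EMAIL_RE.sub("<email>", text)
    text = IPV4_RE.sub("<ip>", text)
    return text

print(anonymize("Contact jane.doe@example.com from 192.168.0.1"))
# Contact <email> from <ip>
```

Note that all other document content passes through untouched.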

## Dataset performance evaluation and ablations

To measure dataset performance on the `eng_Latn` subset, we refined our set of tasks to the following list (note in particular the addition of two table-extraction tasks):
- [**SQuAD 2.0**](https://huggingface.co/datasets/lighteval/squad_v2)