README.md
> Liberating 3T of the finest tokens from PDFs

## What is this?

As we run out of web pages to process, the natural question has always been: what to do next? Only a few knew about a data source that everyone avoided for ages, due to its incredible extraction cost and complexity: **PDFs**.

📄 **FinePDFs** is exactly that. It is the largest publicly available corpus sourced exclusively from PDFs, containing about **3 trillion tokens** across **475 million documents** in **1733 languages**.

The data was sourced from 105 [CommonCrawl](https://commoncrawl.org/) snapshots, spanning the _summer of 2013 to February 2025_, as well as documents refetched from the internet, and processed using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/), our large-scale data processing library. This carefully deduplicated and filtered dataset comprises roughly **20 terabytes** of data, amounting to 3T tokens. For PII and opt-out, see [_Personal and Sensitive Information and opt-out_](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#personal-and-sensitive-information-and-opt-out).
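
As a rough sanity check, the sizes quoted above imply an average document length in the low thousands of tokens and an on-disk cost of a handful of bytes per token. A sketch using the card's approximate figures (not exact counts):

```python
# Back-of-envelope check using the approximate figures from the card:
# ~3T tokens, ~475M documents, ~20 TB on disk.
TOKENS = 3e12
DOCS = 475e6
SIZE_BYTES = 20e12

tokens_per_doc = TOKENS / DOCS          # average document length
bytes_per_token = SIZE_BYTES / TOKENS   # rough storage cost per token

print(f"~{tokens_per_doc:,.0f} tokens/doc, ~{bytes_per_token:.1f} bytes/token")
```

This works out to roughly 6,300 tokens per document and about 6.7 bytes per token, i.e. PDF-sourced documents are substantially longer than a typical web page.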

As is tradition, the dataset is fully reproducible and released under the **ODC-By 1.0 license**.

You will be able to access the reproduction code, ablation and evaluation setup in this [GitHub repository](https://github.com/huggingface/finepdfs) soon 👷.

## Languages and available subsets

Each language is identified by its [ISO 639-3 code](https://iso639-3.sil.org/code_tables/639/data), and the data is grouped by language-script pairs, since some languages have content in multiple scripts.

In total, we provide data for **1733 language-script pairs**. Of these, **978** have more than 1M tokens, and **66** have more than 1B tokens of data. Most languages also include a small `test` split, which should not be trained on.

Additionally, documents whose language we could not identify are marked as "unknown".
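
Subsets are keyed by these language-script pairs (an ISO 639-3 code plus a script code). As a minimal sketch — assuming subset names of the form `<lang>_<script>`, which is our reading of the description above rather than a documented guarantee — splitting a pair back into its parts:

```python
# Minimal sketch: split a language-script subset name such as "eng_Latn"
# into its ISO 639-3 language code and its script code. The "<lang>_<script>"
# naming is an assumption based on the description above.

def split_subset(name: str) -> tuple[str, str]:
    lang, script = name.split("_", 1)
    return lang, script

print(split_subset("eng_Latn"))   # ('eng', 'Latn')

# Loading a single subset would then look roughly like this (untested sketch;
# the dataset id is assumed, not confirmed by the card):
# from datasets import load_dataset
# ds = load_dataset("HuggingFaceFW/finepdfs", "eng_Latn", split="train", streaming=True)
```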

The following table shows the size of the filtering subset for the 50 largest languages.

```
@misc{kydlicek2025finepdfs,
  title={FinePDFs},
  author={Hynek Kydl{\'\i}{\v{c}}ek and Guilherme Penedo and Leandro von Werra},
  year={2025},
  publisher={Hugging Face},
  journal={Hugging Face repository},
}
```