This is a large-scale collection of web-crawled documents in 191 world languages, produced by the [HPLT project](https://hplt-project.org/).
The source of the data is mostly the [Internet Archive](https://archive.org/), with some additions from [Common Crawl](https://commoncrawl.org/).

For a detailed description of the dataset, please refer to [our website](https://hplt-project.org/datasets/v2.0) and [our pre-print](https://arxiv.org/abs/2503.10267).

## The Cleaned variant of HPLT Datasets v2.0

This is the ```cleaned``` variant of HPLT Datasets v2.0, converted to the Parquet format semi-automatically when uploaded here.
The original JSONL files (which take ~4x less disk space than this HF version) and the larger non-cleaned version can be found at https://hplt-project.org/datasets/v2.0.

### Dataset Performance

#### External Evaluation

The HuggingFace team has [compared the utility of various multilingual corpora for training large language models in their FineWeb2 initiative](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2).

They found that the HPLT v2 datasets come right after their own [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2), on par with the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset, as shown in this figure produced by HuggingFace:

<img src="https://huggingface.co/datasets/HuggingFaceFW/admin/resolve/main/multilingual_datasets_comparison.png" width="800" height="800" />

In fact, it's even better: if one looks at the language-specific results, it becomes clear that for Arabic, Hindi, Russian, Thai and Turkish (5 out of the 9 languages HuggingFace evaluated on), [HPLT v2 is on par with or better than FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#comparison-with-other-datasets).
The average score is lower mostly because of Chinese, so we have some work ahead for this language!
Note that the source of the FineWeb 2 (and CulturaX) data is exclusively CommonCrawl, while the HPLT datasets are to a large extent composed of Internet Archive crawls.

Thus, **FineWeb-2 and HPLT v2 are complementary to each other and should be used together**.

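One simple way to act on that recommendation is to interleave documents from both corpora at a fixed ratio. A hypothetical sketch (the helper and the 1:1 ratio are our own illustration, not a recipe from either project):

```python
def interleave(streams, weights):
    """Round-robin over document streams, yielding weights[i] documents
    from stream i per cycle; stops when any stream runs dry."""
    iters = [iter(s) for s in streams]
    while True:
        for it, w in zip(iters, weights):
            for _ in range(w):
                try:
                    yield next(it)
                except StopIteration:
                    return  # one corpus exhausted: stop mixing

# Placeholder document lists standing in for FineWeb-2 and HPLT v2 streams.
mixed = list(interleave([["f1", "f2", "f3"], ["h1", "h2", "h3"]], [1, 1]))
```

In practice, the `datasets` library's `interleave_datasets` provides the same idea with sampling probabilities and streaming support.
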
#### Internal Evaluation

We conducted FineWeb-style ablation studies within the HPLT project, focusing on one high-resource and one low-resource language: English and Norwegian.

We train 1.7B-parameter decoder-only LMs using 100B/30B tokens sampled from the English/Norwegian parts of our HPLT v2 dataset, respectively. We replicate the FineWeb corpora-comparison design and train the models with a fixed pretraining setup, varying only the pretraining corpus (English: four corpora; Norwegian: five corpora). A general description of the training and evaluation setups is given below; for more details, see Section 6.2 and Appendix I of [our pre-print](https://arxiv.org/abs/2503.10267).

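For a sense of scale, the token budgets translate into rough optimizer-step counts. This is a back-of-envelope estimate assuming the ~2M-token global batch of the FineWeb ablation recipe this setup replicates, not a reported figure:

```python
# Rough step-count estimate; assumes the ~2M-token global batch of the
# replicated FineWeb setup, so these are approximations.
GLOBAL_BATCH_TOKENS = 2_000_000

english_steps = 100_000_000_000 // GLOBAL_BATCH_TOKENS   # 100B-token budget
norwegian_steps = 30_000_000_000 // GLOBAL_BATCH_TOKENS  # 30B-token budget
print(english_steps, norwegian_steps)  # 50000 15000
```
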
**English**

* Corpora: [HPLT v1.2](https://hplt-project.org/datasets/v1.2) and [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
* Pretraining framework and infrastructure: we trained our English models using Megatron-LM on LUMI with 16 nodes, each with 4 AMD MI250x GPUs with a dual-GCD (graphics compute die) design, amounting to 8 logical devices per node. In total, we used 128 devices and a single 64-core CPU for approximately 84 hours, totalling 11,008 GPU-hours per model.
* Evaluation tasks: [ARC (Easy and Challenge)](https://huggingface.co/datasets/allenai/ai2_arc), [Hellaswag](https://huggingface.co/datasets/Rowan/hellaswag), [PIQA](https://huggingface.co/datasets/ybisk/piqa), and [OpenbookQA](https://huggingface.co/datasets/allenai/openbookqa). We consider only the 0-shot evaluation regime.
* Evaluation framework: [LightEval](https://github.com/huggingface/lighteval/tree/main).
* Results: please expand the plot below. Our models trained on the HPLT v2 datasets reach performance similar to models trained on FineWeb data and considerably outperform models trained on HPLT v1.2.

<details>
<summary>English Results</summary>
<img src="ablations_english.png" alt="English ablation studies results" width="600"/>
</details>

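Plots like the one above typically report the macro average of per-task `acc_norm`. A quick sketch of that aggregation; the scores below are invented placeholders for illustration, not measured results:

```python
# Macro average of per-task normalized accuracies; the numbers are
# placeholders for illustration, NOT measured results.
scores = {
    "arc_easy": 0.60,
    "arc_challenge": 0.30,
    "hellaswag": 0.45,
    "piqa": 0.70,
    "openbookqa": 0.35,
}
macro_avg = sum(scores.values()) / len(scores)
print(round(macro_avg, 3))  # 0.48
```

The macro average weights every task equally regardless of its size, which is why a single weak task (e.g. one language or benchmark) can pull the headline number down.
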
**Norwegian**

* Corpora: [HPLT v1.2](https://hplt-project.org/datasets/v1.2), [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2), [mC4](https://huggingface.co/datasets/allenai/c4), and [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
* Pretraining framework and infrastructure: we trained our Norwegian models using Megatron-DeepSpeed on LUMI with 32 nodes, each with 4 AMD MI250x GPUs. The full pretraining run of each model took approximately 15 hours (wall-clock time), or 1,920 GPU-hours.
* Evaluation tasks: [NorCommonsenseQA](https://huggingface.co/datasets/ltg/norcommonsenseqa), [NorOpenBookQA](https://huggingface.co/datasets/ltg/noropenbookqa), [NRK-Quiz-QA](https://huggingface.co/datasets/ltg/nrk_quiz_qa), [NCB](https://huggingface.co/datasets/hcfa/ncb), [NorIdiom](https://huggingface.co/datasets/Sprakbanken/Norwegian_idioms), and [NorQuAD](https://huggingface.co/datasets/ltg/norquad). We discarded tasks that provided a low signal based on the monotonicity and non-random performance criteria defined in [the FineWeb-2 evaluation design](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fine-tasks); the remaining tasks were NCB, NRK-Quiz-QA, NorCommonsenseQA, and NorQuAD. We aggregated performance using the average normalized score and consider only the 0-shot evaluation regime.
* Evaluation framework: [NorEval](https://github.com/ltgoslo/noreval/tree/main), a Norwegian language understanding and generation benchmark built on LM Evaluation Harness.
* Results: please expand the plot below. The Norwegian models trained on FineWeb-2, CulturaX, and mC4 perform on par with those trained on HPLT v2 and outperform those trained on HPLT v1.2. Performance gains start to level off after 16B tokens, with the FineWeb-2 and HPLT v2 scores being more stable during pretraining. This suggests that CulturaX, FineWeb-2, and HPLT v2 are more effective corpora for Norwegian, and that mixtures of them could provide further benefits.

<details>
<summary>Norwegian Results</summary>
<img src="ablations_norwegian.jpg" alt="Norwegian ablation studies results" width="600"/>
</details>

### Languages

The ```cleaned``` version of HPLT Datasets v2.0 consists of subsets corresponding to 191 language codes.
Below we provide a list of language codes. For each language code, the amount of text is shown as measured in:

| 188 | zho_Hans | 4.24e+10 | 7.40e+10 | 2.35e+12 | 1.25e+09 | Chinese | zho | | zh | zh |
| 189 | zho_Hant | 4.48e+09 | 9.51e+09 | 2.87e+11 | 1.57e+08 | Chinese | zho | | zh | zh |
| 190 | zsm_Latn | 5.80e+08 | 1.15e+10 | 7.84e+10 | 1.84e+07 | Standard Malay | zsm | msa | | ms |
| 191 | zul_Latn | 2.71e+06 | 4.44e+07 | 3.81e+08 | 1.14e+05 | Zulu | zul | | zu | zu |

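The counts in the table use scientific notation; a small hypothetical helper makes them easier to read at a glance:

```python
def human(n: float) -> str:
    """Render a count like 4.24e+10 as '42.4B'."""
    for div, suffix in ((1e12, "T"), (1e9, "B"), (1e6, "M"), (1e3, "K")):
        if n >= div:
            return f"{n / div:.3g}{suffix}"
    return f"{n:.3g}"

print(human(4.24e10))  # 42.4B  (the first zho_Hans figure above)
print(human(1.14e5))   # 114K   (the last zul_Latn figure above)
```
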
### Cite us

```
@article{burchell2025expanded,
  title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
  author={Burchell, Laurie and de Gibert, Ona and Arefyev, Nikolay and Aulamo, Mikko and Ba{\~n}{\'o}n, Marta and Fedorova, Mariia and Guillou, Liane and Haddow, Barry and Haji{\v{c}}, Jan and Henriksson, Erik and others},
  journal={arXiv preprint arXiv:2503.10267},
  year={2025}
}
```