Internal evaluations first
README.md
CHANGED
@@ -975,27 +975,17 @@ The original JSONL files (which take ~4x less disk space than this HF version)
 
 ### Dataset Performance
 
-#### External Evaluation
-
-The HuggingFace team has [compared the utility of various multilingual corpora for training large language models in their FineWeb2 initiative](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2).
-
-They found that the HPLT v2 datasets are next to their [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2), on par with the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset as shown in this figure produced by HuggingFace:
-
-<img src="https://huggingface.co/datasets/HuggingFaceFW/admin/resolve/main/multilingual_datasets_comparison.png" width="800" height="800" />
-
-This is a massive improvement compared to the HPLT v1 datasets, as can be seen on the plot above.
-In fact, it’s even better: if one looks at the language-specific results, it becomes clear that on
-Arabic, Hindi, Russian, Thai and Turkish (5 out of 9 languages HuggingFace evaluated on), [HPLT v2 is on par or better than FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#comparison-with-other-datasets).
-The average score is lower mostly because of Chinese, so we have some work ahead for this language!
-Note that the source of the FineWeb 2 (and CulturaX) data is exclusively CommonCrawl, while the HPLT datasets are to a large extent composed of Internet Archive crawls.
-Thus, **FineWeb-2 and HPLT v2 are complementary to each other and should be used together**.
-
 #### Internal Evaluation
 
 We conducted FineWeb-style ablation studies within the HPLT project with a focus on one high-resource and one low-resource language: English and Norwegian.
 
-We train 1.7B decoder-only LMs using 100B/30B tokens sampled from the English/Norwegian parts of our HPLT v2 dataset respectively.
 
 
 **English**
 
@@ -1003,14 +993,7 @@ We train 1.7B decoder-only LMs using 100B/30B tokens sampled from the English/No
 * Pretraining framework and infrastructure: We trained our English models using Megatron-LM on LUMI with 16 nodes, each with 4 AMD MI250x GPUs with a dual-GCD (graphics compute die) design, amounting to 8 logical devices per node. In total, we used 128 devices and a single 64-core CPU for approximately 84 hours, totalling 11,008 GPU hours per model.
 * Evaluation tasks: [ARC (Easy and Challenge)](https://huggingface.co/datasets/allenai/ai2_arc), [Hellaswag](https://huggingface.co/datasets/Rowan/hellaswag), [PIQA](https://huggingface.co/datasets/ybisk/piqa), and [OpenbookQA](https://huggingface.co/datasets/allenai/openbookqa). We consider only the 0-shot evaluation regime.
 * Evaluation framework: [LightEval](https://github.com/huggingface/lighteval/tree/main).
-* Results:
-
-
-<details>
-<summary>English Results</summary>
-<img src="ablations_english.png" alt="English ablation studies results" width="600"/>
-
-</details>
 
 
 **Norwegian**
@@ -1019,15 +1002,22 @@
 * Pretraining framework and infrastructure: We trained our Norwegian models using Megatron-DeepSpeed on LUMI with 32 nodes, each with 4 AMD MI250x GPUs. The full pretraining run of each model took approximately 15 hours (wall-clock time), or 1,920 GPU-hours.
 * Evaluation tasks: [NorCommonsenseQA](https://huggingface.co/datasets/ltg/norcommonsenseqa), [NorOpenBookQA](https://huggingface.co/datasets/ltg/noropenbookqa), [NRK-Quiz-QA](https://huggingface.co/datasets/ltg/nrk_quiz_qa), [NCB](https://huggingface.co/datasets/hcfa/ncb), [NorIdiom](https://huggingface.co/datasets/Sprakbanken/Norwegian_idioms), and [NorQuAD](https://huggingface.co/datasets/ltg/norquad). We discarded tasks that provided a low signal based on the monotonicity and non-random performance criteria defined in [the FineWeb-2 evaluation design](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fine-tasks). The resulting tasks were NCB, NRK-Quiz-QA, NorCommonsenseQA, and NorQuAD. We aggregated the performance using the average normalized score. We consider only the 0-shot evaluation regime.
 * Evaluation framework: [NorEval](https://github.com/ltgoslo/noreval/tree/main), a Norwegian language understanding and generation evaluation benchmark based upon LM Evaluation Harness.
-* Results:
 
 
-<details>
-<summary>Norwegian Results</summary>
-<img src="ablations_norwegian.jpg" alt="Norwegian ablation studies results" width="600"/>
-
-</details>
 
 
 
 ### Languages
 
 ### Dataset Performance
 
 #### Internal Evaluation
 
 We conducted FineWeb-style ablation studies within the HPLT project with a focus on one high-resource and one low-resource language: English and Norwegian.
 
+We train 1.7B decoder-only LMs using 100B/30B tokens sampled from the English/Norwegian parts of our HPLT v2 dataset respectively.
+We replicate the FineWeb corpora comparison design and train the models with a fixed pretraining setup, varying only the pretraining corpus (English: four corpora; Norwegian: five corpora).
+A general description of the training and evaluation setups is given below; see Section 6.2 and Appendix I [in our paper](https://arxiv.org/abs/2503.10267) for more details.
 
+English Results | Norwegian Results
+:-------------------------:|:-------------------------:
+ | 
 
 **English**
 
 * Pretraining framework and infrastructure: We trained our English models using Megatron-LM on LUMI with 16 nodes, each with 4 AMD MI250x GPUs with a dual-GCD (graphics compute die) design, amounting to 8 logical devices per node. In total, we used 128 devices and a single 64-core CPU for approximately 84 hours, totalling 11,008 GPU hours per model.
 * Evaluation tasks: [ARC (Easy and Challenge)](https://huggingface.co/datasets/allenai/ai2_arc), [Hellaswag](https://huggingface.co/datasets/Rowan/hellaswag), [PIQA](https://huggingface.co/datasets/ybisk/piqa), and [OpenbookQA](https://huggingface.co/datasets/allenai/openbookqa). We consider only the 0-shot evaluation regime.
 * Evaluation framework: [LightEval](https://github.com/huggingface/lighteval/tree/main).
+* Results: See the plot above. Our models trained on the HPLT v2 datasets reach similar performance to the models trained on FineWeb data and considerably outperform the models trained on HPLT v1.2.
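The 0-shot multiple-choice evaluation used for these tasks scores each answer option by the model's log-likelihood and picks the best one. A minimal sketch of the accuracy computation, with made-up log-probabilities (length-normalized scoring is one common variant in LightEval-style harnesses, not necessarily the exact metric reported here):

```python
def pick_answer(choice_logprobs, choice_lengths):
    """Pick the option with the highest length-normalized log-probability
    (acc_norm-style scoring); lengths are token counts per option."""
    scores = [lp / n for lp, n in zip(choice_logprobs, choice_lengths)]
    return max(range(len(scores)), key=scores.__getitem__)

def zero_shot_accuracy(examples):
    """examples: list of (choice_logprobs, choice_lengths, gold_index)."""
    correct = sum(pick_answer(lps, lens) == gold for lps, lens, gold in examples)
    return correct / len(examples)

# Two toy questions with four answer options each (log-probs are invented).
examples = [
    ([-12.0, -7.5, -14.0, -9.0], [4, 3, 5, 3], 1),  # best option is 1, matches gold
    ([-6.0, -9.0, -5.5, -9.0], [2, 3, 2, 3], 0),    # best option is 2, gold is 0
]
print(zero_shot_accuracy(examples))  # 0.5
```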
 
 
 **Norwegian**
 
 
 * Pretraining framework and infrastructure: We trained our Norwegian models using Megatron-DeepSpeed on LUMI with 32 nodes, each with 4 AMD MI250x GPUs. The full pretraining run of each model took approximately 15 hours (wall-clock time), or 1,920 GPU-hours.
 * Evaluation tasks: [NorCommonsenseQA](https://huggingface.co/datasets/ltg/norcommonsenseqa), [NorOpenBookQA](https://huggingface.co/datasets/ltg/noropenbookqa), [NRK-Quiz-QA](https://huggingface.co/datasets/ltg/nrk_quiz_qa), [NCB](https://huggingface.co/datasets/hcfa/ncb), [NorIdiom](https://huggingface.co/datasets/Sprakbanken/Norwegian_idioms), and [NorQuAD](https://huggingface.co/datasets/ltg/norquad). We discarded tasks that provided a low signal based on the monotonicity and non-random performance criteria defined in [the FineWeb-2 evaluation design](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fine-tasks). The resulting tasks were NCB, NRK-Quiz-QA, NorCommonsenseQA, and NorQuAD. We aggregated the performance using the average normalized score. We consider only the 0-shot evaluation regime.
 * Evaluation framework: [NorEval](https://github.com/ltgoslo/noreval/tree/main), a Norwegian language understanding and generation evaluation benchmark based upon LM Evaluation Harness.
+* Results: See the plot above. The Norwegian models trained on FineWeb, CulturaX, and mC4 perform on par with those trained on HPLT v2 and outperform those trained on HPLT v1.2. Performance gains start to level off after 16B tokens, with the FineWeb and HPLT v2 scores being more stable during pretraining. This suggests that CulturaX, FineWeb, and HPLT v2 are the more effective corpora for Norwegian, and mixtures of them may provide further benefits.
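The "average normalized score" aggregation mentioned above can be sketched as follows. Following the FineWeb-2 evaluation design, each task's accuracy is rescaled so that random guessing maps to 0 before averaging (all accuracies and per-task random baselines below are illustrative assumptions, not actual results):

```python
def normalized_score(accuracy, random_baseline):
    """Rescale so a random guesser scores 0.0 and a perfect model 1.0."""
    return (accuracy - random_baseline) / (1.0 - random_baseline)

def average_normalized_score(task_results):
    """task_results maps task name -> (accuracy, random_baseline)."""
    scores = [normalized_score(acc, rb) for acc, rb in task_results.values()]
    return sum(scores) / len(scores)

# Made-up accuracies and assumed random baselines for the four retained tasks.
results = {
    "ncb": (0.60, 0.50),               # assumed binary choice: baseline 1/2
    "nrk_quiz_qa": (0.40, 0.25),       # assumed 4-way multiple choice
    "norcommonsenseqa": (0.35, 0.20),  # assumed 5-way multiple choice
    "norquad": (0.55, 0.00),           # generative QA, baseline taken as 0
}
print(average_normalized_score(results))
```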
+
+#### External Evaluation
+
+The HuggingFace team has [compared the utility of various multilingual corpora for training large language models in their FineWeb2 initiative](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2).
 
+They found that the HPLT v2 datasets rank just behind their [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) and on par with the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset, as shown in this figure produced by HuggingFace:
 
+<img src="https://huggingface.co/datasets/HuggingFaceFW/admin/resolve/main/multilingual_datasets_comparison.png" width="800" height="800" />
 
+This is a massive improvement over the HPLT v1 datasets, as the figure above shows.
+In fact, it’s even better: the language-specific results make clear that on
+Arabic, Hindi, Russian, Thai and Turkish (5 out of the 9 languages HuggingFace evaluated on), [HPLT v2 is on par with or better than FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#comparison-with-other-datasets).
+The average score is lower mostly because of Chinese; we expect it to improve a lot in HPLT v3.
+Note that the source of the FineWeb 2 (and CulturaX) data is exclusively CommonCrawl, while the HPLT datasets are to a large extent composed of Internet Archive crawls.
+Thus, **FineWeb-2 and HPLT v2 are complementary to each other and should be used together**.
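One simple way to act on this recommendation when building a pretraining stream is to sample each document from one corpus or the other with fixed probabilities. A pure-Python toy sketch (the document contents and the 50/50 ratio are placeholders, not a recommendation; a real pipeline would stream the actual datasets, e.g. with the `datasets` library in streaming mode):

```python
import random
from itertools import islice

def mix_corpora(streams, probabilities, seed=0):
    """Yield documents drawn from several corpus iterators with fixed
    sampling probabilities; a stream that runs dry is dropped."""
    rng = random.Random(seed)
    streams = [iter(s) for s in streams]
    weights = list(probabilities)
    while streams:
        i = rng.choices(range(len(streams)), weights=weights, k=1)[0]
        try:
            yield next(streams[i])
        except StopIteration:
            del streams[i], weights[i]

# Toy stand-ins for the two corpora.
fineweb2 = (f"fineweb2-doc-{i}" for i in range(5))
hplt_v2 = (f"hplt2-doc-{i}" for i in range(5))

mixed = list(islice(mix_corpora([fineweb2, hplt_v2], [0.5, 0.5]), 6))
print(mixed)
```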
 
 
 ### Languages