We train 1.7B decoder-only LMs using 100B/30B tokens sampled from the English/Norwegian corpora listed below.
**English**
* Corpora: [HPLT v1.2](https://hplt-project.org/datasets/v1.2), [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), and HPLT v2 (ours; deduplicated and cleaned versions).
* Pretraining framework and infrastructure: We trained our English models using Megatron-LM on LUMI with 16 nodes, each with 4 AMD MI250x GPUs; the dual-GCD (graphics compute die) design of the MI250x yields 8 logical devices per node. In total, we used 128 logical devices and one 64-core CPU per node for approximately 84 hours, totalling 11,008 GPU-hours per model.
* Evaluation tasks: [ARC (Easy and Challenge)](https://huggingface.co/datasets/allenai/ai2_arc), [HellaSwag](https://huggingface.co/datasets/Rowan/hellaswag), [PIQA](https://huggingface.co/datasets/ybisk/piqa), and [OpenBookQA](https://huggingface.co/datasets/allenai/openbookqa). We consider only the 0-shot evaluation regime.
* Evaluation framework: [LightEval](https://github.com/huggingface/lighteval/tree/main).
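As a sanity check on the compute figures above: 16 nodes × 4 MI250x GPUs × 2 GCDs gives 128 logical devices, and the reported 11,008 GPU-hours then corresponds to 86 wall-clock hours per device, consistent with the "approximately 84 hours" figure. A minimal arithmetic sketch:

```python
# Back-of-the-envelope check of the English pretraining budget
# (numbers taken from the card; per-node device count is 4 GPUs x 2 GCDs).
NODES = 16
GPUS_PER_NODE = 4
GCDS_PER_GPU = 2  # each MI250x exposes two graphics compute dies

logical_devices = NODES * GPUS_PER_NODE * GCDS_PER_GPU  # 128 devices
reported_gpu_hours = 11_008
implied_wall_hours = reported_gpu_hours / logical_devices  # 86.0 hours

print(logical_devices, implied_wall_hours)
```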

**Norwegian**
* Corpora: [HPLT v1.2](https://hplt-project.org/datasets/v1.2), [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2), [mC4](https://huggingface.co/datasets/allenai/c4), [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX), and HPLT v2 (ours).
* Pretraining framework and infrastructure: We trained our Norwegian models using Megatron-DeepSpeed on LUMI with 32 nodes, each with 4 AMD MI250x GPUs. The full pretraining run of each model took approximately 15 hours (wall-clock time), or 1,920 GPU-hours.
* Evaluation tasks: [NorCommonsenseQA](https://huggingface.co/datasets/ltg/norcommonsenseqa), [NorOpenBookQA](https://huggingface.co/datasets/ltg/noropenbookqa), [NRK-Quiz-QA](https://huggingface.co/datasets/ltg/nrk_quiz_qa), [NCB](https://huggingface.co/datasets/hcfa/ncb), [NorIdiom](https://huggingface.co/datasets/Sprakbanken/Norwegian_idioms), and [NorQuAD](https://huggingface.co/datasets/ltg/norquad). We discarded tasks that provided a low signal based on the monotonicity and non-random performance criteria defined in [the FineWeb-2 evaluation design](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fine-tasks). The resulting tasks were NCB, NRK-Quiz-QA, NorCommonsenseQA, and NorQuAD. We aggregated the performance using the average normalized score. We consider only the 0-shot evaluation regime.
* Evaluation framework: [NorEval](https://github.com/ltgoslo/noreval/tree/main), a Norwegian language understanding and generation evaluation benchmark built on top of [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness).
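The "average normalized score" aggregation can be sketched as follows. The normalization here rescales each task's accuracy by its random-guess baseline, in the spirit of the FineWeb evaluation design; the per-task accuracies and baselines below are hypothetical, and NorEval's exact aggregation may differ.

```python
# Hedged sketch: average normalized accuracy over multiple-choice tasks.
def normalized_score(acc: float, random_baseline: float) -> float:
    """Rescale accuracy so random guessing maps to 0 and a perfect score to 1."""
    return (acc - random_baseline) / (1.0 - random_baseline)

# Hypothetical per-task results as (accuracy, random-guess baseline).
results = {
    "nrk_quiz_qa": (0.40, 0.25),       # 4-way multiple choice
    "norcommonsenseqa": (0.35, 0.20),  # 5-way multiple choice
    "ncb": (0.60, 0.50),               # binary choice
}

avg = sum(normalized_score(a, b) for a, b in results.values()) / len(results)
print(f"average normalized score: {avg:.3f}")
```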