Commit db16203 (parent: dbe5b8c): docs: Fix link to maester repo

README.md (changed):
@@ -17,7 +17,6 @@ Munin-7B-open-pt is a 7-billion-parameter [open-source](https://opensource.org/a
 Munin-7B-open-pt is a base model that can serve as a starting point for fine-tuning and post-training.
 It has not been instruction-tuned and cannot directly be expected to function as a chat model.
 
-
 | Model | Model Weights | Training Data | Training Code |
 |:------|:--------------|:--------------|:--------------|
 | Llama | Public with custom license | Private | Private |
@@ -25,7 +24,6 @@ It has not been instruction-tuned and cannot directly be expected to function as
 | Apertus | Public, openly licensed | Reproducible, license unspecified | Public, openly licensed |
 | **Munin-7B-open-pt** (ours) | **Public, openly licensed** | **Public, openly licensed** | **Public, openly licensed** |
 
-
 ## Evaluation
 
 ### Performance on Danish
@@ -42,11 +40,9 @@ We compare Munin-7B-Open-pt at various training stages with its base model [Comm
 and two models from the Pleias family ([Pleias-350M-Preview](https://huggingface.co/PleIAs/Pleias-350m-Preview) and [Pleias-1.2B-Preview](https://huggingface.co/PleIAs/Pleias-1.2b-Preview)).
 All comparison models were trained exclusively on open data, either in the public domain or under a permissive license.
 
-
 The following tables show the performance on each dataset.
 For each, we report the respective main metric from EuroEval together with its confidence interval.
 
-
 | Model | scala-da (MCC) | dala (MCC) | angry-tweets (MCC) | dansk (Micro F1, No Misc) | danske-talemaader (MCC) | danish-citizen-tests (MCC) | multi-wiki-qa-da (F1) | hellaswag-da (MCC) | nordjylland-news (BERTScore) | average |
 | ---------------------------- | ------------- | ------------- | ------------------ | ------------------------- | ----------------------- | -------------------------- | --------------------- | ------------------ | ---------------------------- | ------- |
 | base (comma-v0.1-2t) | 0.9 ± 0.8 | 0.2 ± 0.6 | 39.8 ± 1.4 | 32.0 ± 2.8 | 3.6 ± 2.3 | 10.7 ± 4.1 | 66.4 ± 0.8 | 3.8 ± 1.0 | 60.2 ± 1.7 | 24.2 |
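Most of the scores above are Matthews correlation coefficients (MCC), which EuroEval appears to report scaled by 100, so a near-zero entry means essentially chance-level performance. As a rough illustration of the metric itself (not EuroEval's implementation), binary MCC can be computed from confusion counts:

```python
def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Binary Matthews correlation coefficient from confusion counts.

    Returns a value in [-1, 1]; 0 means no better than chance,
    1 means perfect agreement between predictions and labels.
    """
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

# A classifier that is right 3/4 of the time in both classes:
print(round(mcc(tp=30, tn=30, fp=10, fn=10), 2))  # 0.5
```

Unlike plain accuracy, MCC stays near zero for a degenerate classifier that always predicts the majority class, which makes it a stricter headline metric for the tables above.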
@@ -65,7 +61,6 @@ For each, we report the respective main metric from EuroEval and the confidence
 This section shows how performance on English deteriorates when the model is adapted to Danish. Generally, we observe only performance degradation
 across tasks, with the exception of `squad`.
 
-
 | Model | scala-en (MCC) | sst5 (MCC) | conll-en (Micro F1 no misc) | life-in-the-uk (MCC) | squad (F1) | hellaswag (MCC) | cnn-dailymail (BERTScore) | average |
 | ---------------------------- | ------------- | ------------ | --------------------------- | -------------------- | ------------ | --------------- | ------------------------- | ------- |
 | base (comma-v0.1-2t) | **29.7** ± 1.9 | **61.8** ± 2.1 | **57.5** ± 2.8 | 41.6 ± 2.4 | **90.4** ± 0.4 | **16.8** ± 0.6 | **63.3** ± 0.9 | **51.6** |
@@ -77,14 +72,11 @@ across tasks, with the exception of `squad`.
 | Pleias-350m-Preview | 0.7 ± 1.8 | 15.4 ± 7.3 | 31.8 ± 3.5 | -0.7 ± 2.1 | 31.1 ± 2.3 | 0.2 ± 1.4 | 53.8 ± 1.0 | 18.9 |
 | Pleias-1.2b-Preview | 1.0 ± 2.4 | 48.2 ± 2.6 | 40.9 ± 3.3 | 2.6 ± 2.8 | 52.9 ± 2.5 | -0.1 ± 1.5 | 60.2 ± 1.6 | 29.4 |
 
-
-
 ## Training details
 
 Munin-7B-open-pt is continually pre-trained from [Comma v0.1-2T](https://huggingface.co/common-pile/comma-v0.1-2t) on 30B tokens, using a mix of [Danish Dynaword](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword) and the [Comma v0.1 dataset](https://huggingface.co/datasets/common-pile/comma_v0.1_training_dataset), both comprising only public domain and openly licensed data.
 
-
-Munin-7B-open-pt has been trained using the [maester](https://github.com/rlrs/maester/tree/main/3aca26960eaa1a16250b3feda40303c240ba4ca1) framework developed as part of [Danish Foundation Models](https://foundationmodels.dk/). All training was performed on a single 8x NVIDIA B200 node (the first of its kind in Denmark) as part of the [SDU UCloud](https://cloud.sdu.dk/) research cloud.
+Munin-7B-open-pt has been trained using the [maester](https://github.com/rlrs/maester) framework developed as part of [Danish Foundation Models](https://foundationmodels.dk/). All training was performed on a single 8x NVIDIA B200 node (the first of its kind in Denmark) as part of the [SDU UCloud](https://cloud.sdu.dk/) research cloud.
 
 The training was performed in three stages, with data-mix (`open-stageK.py`) and maester (`open-stageK.toml`) configuration files available in each subfolder. The datasets can be created using the `create_dataset.py` script provided in this repository.
 
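The weighted data mix described above (2/3 Dynaword, 1/3 Common-Pile in the later stages) could in principle be drawn by sampling sources in fixed proportions. The following is a hypothetical sketch, not the actual logic of `create_dataset.py`:

```python
import random

def mix_datasets(sources: dict[str, list[str]], weights: dict[str, float],
                 n_samples: int, seed: int = 0) -> list[str]:
    """Draw documents from several sources in fixed proportions.

    Illustrative only: each draw first picks a source according to its
    weight, then samples a document from that source with replacement.
    """
    rng = random.Random(seed)
    names = list(sources)
    probs = [weights[n] for n in names]
    out = []
    for _ in range(n_samples):
        name = rng.choices(names, weights=probs, k=1)[0]
        out.append(rng.choice(sources[name]))
    return out

mixed = mix_datasets(
    {"dynaword": ["da_doc_1", "da_doc_2"], "common_pile": ["en_doc_1"]},
    weights={"dynaword": 2 / 3, "common_pile": 1 / 3},
    n_samples=9,
)
```

In practice, frameworks typically interleave pre-tokenized shards rather than individual documents, but the proportional-sampling idea is the same.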
@@ -96,8 +88,6 @@ The characteristics of the three pre-training stages are detailed in the followi
 | stage 2 | 524,288 | 18,926 | [subfolder="stage2"](https://huggingface.co/danish-foundation-models/munin-7b-open-pt/tree/main/stage2) | 2/3 [Dynaword](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/tree/9e230b35e31a510e5ab909112ad5bfc9463b2c23); <br> 1/3 [Common-Pile](https://huggingface.co/datasets/common-pile/comma_v0.1_training_dataset/tree/5afc546db324e7f39f297ba757c9a60547151e7c) | Excludes depbank, jvj, nordjyllandnews, synne for Dynaword; <br> uses subsets and weighting from [Comma-v0.1-2T](https://huggingface.co/common-pile/comma-v0.1-2t) cooldown phase for Common-Pile; LR schedule with 500 steps warmup, constant 1e-5, 500 steps cooldown |
 | stage 3 | 524,288 | 18,926 | [subfolder="stage3"](https://huggingface.co/danish-foundation-models/munin-7b-open-pt/tree/main/stage3) | 2/3 [Dynaword](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/tree/9e230b35e31a510e5ab909112ad5bfc9463b2c23); <br> 1/3 [Common-Pile](https://huggingface.co/datasets/common-pile/comma_v0.1_training_dataset/tree/5afc546db324e7f39f297ba757c9a60547151e7c) | Excludes depbank, jvj, nordjyllandnews, synne for Dynaword; <br> uses subsets and weighting from [Comma-v0.1-2T](https://huggingface.co/common-pile/comma-v0.1-2t) cooldown phase for Common-Pile; LR schedule with 500 steps warmup, square root decay from 1e-5 |
 
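The stage-2 and stage-3 learning-rate schedules described in the table above can be sketched as step-indexed functions. This is an illustration of the described shapes, not maester's implementation; in particular, the cooldown shape is assumed linear and the decay is assumed to be a standard inverse-square-root from the warmup boundary:

```python
def lr_stage2(step: int, total: int = 18_926, peak: float = 1e-5,
              warmup: int = 500, cooldown: int = 500) -> float:
    """Stage 2: linear warmup -> constant 1e-5 -> cooldown (shape assumed linear)."""
    if step < warmup:
        return peak * step / warmup
    if step >= total - cooldown:
        return peak * (total - step) / cooldown
    return peak

def lr_stage3(step: int, peak: float = 1e-5, warmup: int = 500) -> float:
    """Stage 3: linear warmup -> inverse-square-root decay from 1e-5."""
    if step < warmup:
        return peak * step / warmup
    return peak * (warmup / step) ** 0.5

print(lr_stage2(9_000))   # 1e-05 (constant phase)
print(lr_stage3(2_000))   # 5e-06 (peak / sqrt(4))
```

Note also that 524,288 tokens per step over 18,926 steps is roughly 9.9B tokens per stage, which is consistent with the stated 30B-token total across three stages (assuming stage 1 used a similar step count).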
-
-
 ## Limitations
 
 Munin-7B-Open-pt was trained only on Danish and English-language data and code from the 15 programming languages covered by the [stack-edu classifiers](https://huggingface.co/collections/HuggingFaceTB/the-ultimate-collection-of-code-classifiers-67b5aa3eb8994a4b71453005).
@@ -105,20 +95,16 @@ It will likely have poor performance on other languages or programming languages
 
 As a base model, Munin-7B-Open-pt has not been aligned for safety and may, for example, reflect social biases present in its training data or potentially provide toxic or harmful information.
 
-
 ## License
 
 The model is made available under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) open-source license. It may therefore be used, modified, distributed, and sublicensed for any purpose, including commercial use, without the licensee having to release their own derivative works under the same permissive terms, provided that users retain copyright and license notices and document any modifications they make.
 
-
 ## Project partners & funding
 
 The development of Munin-7B-Open-pt was performed in close collaboration between [Aarhus University](https://chc.au.dk/), the [Alexandra Institute](https://alexandra.dk/), and the [University of Southern Denmark](https://www.sdu.dk/en/forskning/machine-learning) as part of [Danish Foundation Models](https://foundationmodels.dk/).
 
 Funding was provided by the [Danish Ministry of Digital Affairs](https://www.english.digmin.dk/) and the [Danish Ministry of Higher Education and Science](https://ufm.dk/en).
 
-
-
 ## How to cite
 
 Coming soon.