---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 107402461357
    num_examples: 431867387
  download_size: 63321627068
  dataset_size: 107402461357
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- da
size_categories:
- 100M<n<1B
license: unknown
---
# Details
**SnakModel** is a 7B-parameter, autoregressive language model specifically designed for Danish. It is available both as an instruction-tuned variant and as a base version for further fine-tuning. Our models build upon [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf), which we continuously pre-train on a diverse collection of Danish corpora.
**Developers**
[**🧭 NLPnorth** research unit](https://nlpnorth.github.io) at the [IT University of Copenhagen](https://itu.dk), Denmark.
[**🌊 AAU-NLP** research unit](https://aaunlp.github.io) at [Aalborg University Copenhagen](https://aau.dk), Denmark.
[Mike Zhang](https://jjzha.github.io)\*, [Max Müller-Eberstein](https://mxij.me)\*, [Elisa Bassignana](http://elisabassignana.github.io), [Rob van der Goot](https://robvanderg.github.io).
\*equal contribution.
## Deduplication
**Important Note**: We removed two sources from this data, namely DaNewsroom and FTSpeech, because their licenses are unclear.
The deduplication hyperparameters for **this** particular pretraining set:
```
--seed 42 \
--batch_size 1024 \
--num_perm 64 \
--threshold 0.85 \
--hash_bits 32 \
--num_proc 16 \
```
The deduplication hyperparameters for the **original** pretraining set:
```
--seed 42 \
--batch_size 4096 \
--num_perm 128 \
--threshold 0.85 \
```
Note that this run uses fewer permutations (`128 -> 64`), a smaller batch size (`4096 -> 1024`), and fewer hash bits (`64 -> 32`).
We encountered several hard-to-explain OOM errors and decided to lower the memory footprint in this way.
The hardware we used was a machine with 128 cores and 1TB of RAM. This data should take less than 100GB of disk space.
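The flags above describe standard MinHash-based near-duplicate detection. The exact tool is not named here, so the following is a minimal stdlib sketch of the idea only, under assumed semantics: `num_perm` hash functions per document, signatures truncated to `hash_bits` bits, and pairs whose estimated Jaccard similarity reaches `threshold` treated as duplicates (a real pipeline would use LSH banding rather than all-pairs comparison):

```python
import hashlib
from itertools import combinations

# Flag values from this release's dedup run (hypothetical mapping to code).
NUM_PERM = 64      # --num_perm
THRESHOLD = 0.85   # --threshold
HASH_BITS = 32     # --hash_bits
MASK = (1 << HASH_BITS) - 1

def shingles(text, n=5):
    """Character n-grams forming the document's feature set."""
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def minhash(text):
    """NUM_PERM minimum hash values, each truncated to HASH_BITS bits."""
    feats = shingles(text)
    sig = []
    for p in range(NUM_PERM):
        # Distinct salts simulate NUM_PERM independent hash functions.
        salt = p.to_bytes(2, "big") + b"\x00" * 14
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(),
                "big",
            ) & MASK
            for s in feats
        ))
    return sig

def estimated_jaccard(a, b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / NUM_PERM

def deduplicate(docs):
    """Keep the first document of any pair estimated above THRESHOLD."""
    sigs = [minhash(d) for d in docs]
    dropped = set()
    for i, j in combinations(range(len(docs)), 2):
        if j not in dropped and estimated_jaccard(sigs[i], sigs[j]) >= THRESHOLD:
            dropped.add(j)
    return [d for k, d in enumerate(docs) if k not in dropped]
```

Lowering `num_perm` shrinks each signature, lowering `hash_bits` halves the bytes per slot, and a smaller batch size bounds how many signatures are held in memory at once, which is why these three knobs were the ones turned down after the OOM errors.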
## Licenses
For the licensing of the data, we refer to the paper. For the Twitter data specifically, we assumed it was extracted using a version of the Twitter API client, whose code is usually MIT-licensed, and treated that license as an upper bound. We understand, though, that the MIT license usually covers the software/code rather than the data itself, so feel free to leave the Twitter data out. Other Twitter datasets here on HF appear to carry a cc-by* flavor of license.
## Citation
If you find the work in this repository useful, please don't forget to cite:
```bibtex
@inproceedings{snakmodel,
title={{S}nak{M}odel: Lessons Learned from Training an Open Danish Large Language Model},
author={Mike Zhang and Max M{\"u}ller-Eberstein and Elisa Bassignana and Rob van der Goot},
booktitle={The Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies},
year={2024},
url={https://openreview.net/forum?id=YxzfgQGpRQ}
}
``` |