---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 107402461357
    num_examples: 431867387
  download_size: 63321627068
  dataset_size: 107402461357
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- da
size_categories:
- 100M<n<1B
license: unknown
---

# Details

**SnakModel** is a 7B-parameter, autoregressive language model specifically designed for Danish. It comes both as an instruction-tuned variant and as a base version for further fine-tuning. Our models build upon [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf), which we continuously pre-train on a diverse collection of Danish corpora.

**Developers**

[**🧭 NLPnorth** research unit](https://nlpnorth.github.io) at the [IT University of Copenhagen](https://itu.dk), Denmark.
[**🌊 AAU-NLP** research unit](https://aaunlp.github.io) at [Aalborg University Copenhagen](https://aau.dk), Denmark.

[Mike Zhang](https://jjzha.github.io)\*, [Max Müller-Eberstein](https://mxij.me)\*, [Elisa Bassignana](http://elisabassignana.github.io), [Rob van der Goot](https://robvanderg.github.io).
\*equal contribution.
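
For convenience, here is a minimal sketch of how this data can be loaded with the 🤗 `datasets` library. The repository id below is a placeholder, not this dataset's actual id; streaming is used so you can inspect examples without first materializing the full download:

```python
# Minimal sketch for loading this dataset with the 🤗 `datasets` library.
# NOTE: "<org>/<this-dataset>" is a placeholder; replace it with this
# repository's actual id. Streaming avoids downloading all ~63 GB up front.
from datasets import load_dataset

ds = load_dataset(
    "<org>/<this-dataset>",  # hypothetical id, see note above
    split="train",
    streaming=True,
)

# Each example carries the two string fields declared in the card metadata:
# "text" (the document itself) and "source" (the originating corpus).
for example in ds.take(3):
    print(example["source"], example["text"][:80])
```
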
## Deduplication

**Important note**: We removed two sources from this data, namely DaNewsroom and FTSpeech, because their licenses are unclear.

The deduplication hyperparameters for **this** particular pretraining set:

```
--seed 42 \
--batch_size 1024 \
--num_perm 64 \
--threshold 0.85 \
--hash_bits 32 \
--num_proc 16 \
```

The deduplication hyperparameters for the **original** pretraining set:

```
--seed 42 \
--batch_size 4096 \
--num_perm 128 \
--threshold 0.85 \
```

Note that, compared to the original set, this set was deduplicated with fewer permutations (`128 -> 64`), a smaller batch size (`4096 -> 1024`), and fewer hash bits (`64 -> 32`).
We encountered several rather inexplicable out-of-memory (OOM) errors and decided to lower the memory footprint in this way.
The hardware we used was a machine with 128 cores and 1 TB of RAM. This data should take less than 100 GB of disk space.

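These flags correspond to MinHash-based near-duplicate detection. As a rough illustration of what `--seed`, `--num_perm`, and `--threshold` control, here is a minimal sketch using the `datasketch` library. This is not the exact pipeline we ran; note also that `--hash_bits 32` simply means 32-bit hash values, which is `datasketch`'s default via `sha1_hash32`:

```python
# Illustrative sketch of MinHash-LSH deduplication; NOT the exact pipeline
# used for this dataset, only a demonstration of the parameters above.
# datasketch's default hash function (sha1_hash32) already yields 32-bit
# values, matching --hash_bits 32.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 64     # --num_perm: number of hash permutations per signature
THRESHOLD = 0.85  # --threshold: estimated Jaccard similarity above which docs match
SEED = 42         # --seed: makes the permutations reproducible

def signature(text: str) -> MinHash:
    """MinHash signature over whitespace tokens of a document."""
    m = MinHash(num_perm=NUM_PERM, seed=SEED)
    for token in text.split():
        m.update(token.encode("utf-8"))
    return m

docs = {
    "a": "det er en helt almindelig dansk sætning om vejret i dag",
    "b": "det er en helt almindelig dansk sætning om vejret i dag igen",  # near-duplicate of "a"
    "c": "noget andet indhold, der handler om danske sprogmodeller",
}

# Documents whose estimated Jaccard similarity exceeds THRESHOLD land in the
# same LSH bucket; a document is kept only if no near-duplicate is indexed yet.
lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
kept = []
for key, text in docs.items():
    sig = signature(text)
    if lsh.query(sig):  # a near-duplicate was already kept
        continue
    lsh.insert(key, sig)
    kept.append(key)

print(kept)  # likely ["a", "c"]; "b" is dropped (LSH is probabilistic)
```
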
## Licenses

For the licensing of the data, we refer to the paper. For the Twitter data specifically, we assumed, as an upper bound, that it was extracted using a version of the Twitter API, whose tooling is usually MIT-licensed.
We understand, however, that the MIT license typically covers the software/code rather than the data itself, so feel free to leave the Twitter data out. Other Twitter datasets here on HF appear to carry a flavor of cc-by* license.

## Citation

If you find the work in this repository useful, please don't forget to cite:

```bibtex
@inproceedings{snakmodel,
  title={{S}nak{M}odel: Lessons Learned from Training an Open Danish Large Language Model},
  author={Mike Zhang and Max M{\"u}ller-Eberstein and Elisa Bassignana and Rob van der Goot},
  booktitle={The Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies},
  year={2024},
  url={https://openreview.net/forum?id=YxzfgQGpRQ}
}
```