Modalities: Text · Formats: parquet · Libraries: Datasets, Dask

Committed by nielsr (HF Staff) · Commit fbcb5c8 (verified) · 1 parent: ba04813

Improve dataset card with task categories, links, and metadata


This PR improves the dataset card by:

- Adding `text-generation` and `translation` to the `task_categories` metadata to improve discoverability.
- Ensuring the paper link ([Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data](https://arxiv.org/abs/2506.00469)) is included.
- Adding a link to the project page ([https://mala-lm.github.io/emma-500-gen2](https://mala-lm.github.io/emma-500-gen2)).
- Specifying the license as `odc-by`.


These additions make the dataset easier to discover and give users the key references (paper, project page, and license) in one place.

Files changed (1): README.md (+14 -8)
README.md CHANGED
@@ -1,11 +1,18 @@
 ---
 license: odc-by
+task_categories:
+- text-generation
+- translation
+language:
+- multilingual
 ---
 
 # MaLA Corpus: Massive Language Adaptation Corpus
 
 This is the noisy version that integrates texts from different sources.
 
+[Project Page](https://mala-lm.github.io/emma-500-gen2)
+
 ## Dataset Summary
 
 The **MaLA Corpus** (Massive Language Adaptation) is a comprehensive, multilingual dataset designed to support the continual pre-training of large language models. It covers **939 languages** and consists of over **74 billion tokens**, making it one of the largest datasets of its kind. With a focus on improving the representation of low-resource languages, the MaLA Corpus is a critical resource for advancing multilingual models, particularly those aimed at serving underrepresented languages.
@@ -56,12 +63,12 @@ We will comply with legitimate requests by removing the affected sources from th
 ## Citation
 
 ```
-@article{ji2024emma500enhancingmassivelymultilingual,
-title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
-author={Shaoxiong Ji and Zihao Li and Indraneil Paul and Jaakko Paavola and Peiqin Lin and Pinzhen Chen and Dayyán O'Brien and Hengyu Luo and Hinrich Schütze and Jörg Tiedemann and Barry Haddow},
-year={2024},
-journal={arXiv preprint 2409.17892},
-url={https://arxiv.org/abs/2409.17892},
+@article{ji2025emma2,
+title={Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data},
+author={Shaoxiong Ji and Zihao Li and Jaakko Paavola and Indraneil Paul and Hengyu Luo and Jörg Tiedemann},
+year={2025},
+journal={arXiv preprint 2506.00469},
+url={https://arxiv.org/abs/2506.00469},
 }
 ```
 
@@ -69,5 +76,4 @@ We will comply with legitimate requests by removing the affected sources from th
 
 We extend our thanks to the language communities and contributors who helped source, clean, and validate the diverse data used in the MaLA Corpus. Their efforts are invaluable in supporting linguistic diversity in AI research.
 
-This work is done by researchers at [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) in collaboration with partners from TU Darmstadt, the University of Edinburgh, and LMU Munich. It is funded by [HPLT](https://hplt-project.org) and [UTTER](https://he-utter.eu).
-
+This work is done by researchers at [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) in collaboration with partners from TU Darmstadt, the University of Edinburgh, and LMU Munich. It is funded by [HPLT](https://hplt-project.org) and [UTTER](https://he-utter.eu).
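
For readers of the updated card, a minimal sketch of streaming the corpus with the `datasets` library (listed in the page metadata above) is shown below. The repository id is an assumption made for illustration only; it is not stated in the diff, so replace it with the actual id shown on this dataset's page.

```python
# Minimal sketch: stream a few records from the MaLA corpus with the `datasets` library.
# The repository id below is a placeholder assumed for illustration; use the actual id
# shown on this dataset's page.
from datasets import load_dataset

REPO_ID = "MaLA-LM/mala-corpus"  # hypothetical id; replace with the real repository id

# streaming=True avoids downloading the full multi-billion-token corpus up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

for i, example in enumerate(ds):
    print(example)  # each record holds a text sample drawn from one of the source corpora
    if i >= 2:
        break
```

Streaming is the practical choice here, since the summary above describes a corpus of over 74 billion tokens.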