Add task category, license, and paper metadata

#2 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +9 -5
README.md CHANGED

@@ -1,4 +1,10 @@
 ---
+language:
+- ar
+license: other
+task_categories:
+- text-generation
+arxiv: 2512.18834
 configs:
 - config_name: minhash_deduped
   data_files:
@@ -12,14 +18,12 @@ configs:
   data_files:
   - split: train
     path: data/sentence_deduped/*
-default: minhash
-language:
-- ar
+default: minhash_deduped
 ---
 
 <img src="https://huggingface.co/datasets/AdaMLLab/AraMix/resolve/main/finetasks_arabic_main_results.png" width="900" alt="Finetasks benchmark scores, showing AraMix-Matched as SOTA.">
 
-AraMix (https://arxiv.org/abs/2512.18834v2) is an Arabic pretraining corpus containing 178 billion tokens across 179 million documents (in the minhash subset). Rather than scraping the web again, AraMix combines seven publicly available Arabic datasets, applies Arabic-specific quality filtering, and performs cross-dataset deduplication.
+AraMix ([https://arxiv.org/abs/2512.18834](https://arxiv.org/abs/2512.18834)) is an Arabic pretraining corpus containing 178 billion tokens across 179 million documents (in the minhash subset). Rather than scraping the web again, AraMix combines seven publicly available Arabic datasets, applies Arabic-specific quality filtering, and performs cross-dataset deduplication.
 
 We train a 1.4B parameter language model through nanotron on 30 billion tokens to show that the `consensus` subset of AraMix outperforms the previous state-of-the-art, [arabicweb24](https://huggingface.co/datasets/lightonai/ArabicWeb24) (see [Appendix A9 in the Fineweb-2 paper](https://arxiv.org/pdf/2506.20920)) while having more total tokens. Furthermore, the `minhash_deduped` subset performs on-par with nearly 5 times the total number of tokens.
 
@@ -41,7 +45,7 @@ The consensus subset uses cross-dataset agreement as a signal for quality.
 from datasets import load_dataset
 
 ds = load_dataset("AdaMLLab/AraMix", "sentence_deduped")
-ds = load_dataset("AdaMLLab/AraMix", "minhash")
+ds = load_dataset("AdaMLLab/AraMix", "minhash_deduped")
 ds = load_dataset("AdaMLLab/AraMix", "matched")
 ```
 
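A note on the `default: minhash_deduped` line this PR adds: the intent is that loading the card's dataset without naming a subset resolves to `minhash_deduped`. The sketch below illustrates that lookup against an inline copy of the new front matter; it is illustrative only (the `front_matter` string and `default_config` helper are made up here, and this is not how the `datasets` library actually parses card metadata).

```python
# Illustrative only: resolve the default config name from card-style
# metadata, mimicking the effect of the "default:" key added in this PR.
front_matter = """\
language:
- ar
license: other
task_categories:
- text-generation
configs:
- config_name: minhash_deduped
default: minhash_deduped
"""

def default_config(meta: str) -> str:
    # Scan the metadata for the top-level "default:" key and return its value.
    for line in meta.splitlines():
        if line.startswith("default:"):
            return line.split(":", 1)[1].strip()
    raise KeyError("no default config declared")

print(default_config(front_matter))  # prints: minhash_deduped
```

With this key in place, `load_dataset("AdaMLLab/AraMix")` with no config argument should pick up the `minhash_deduped` subset rather than the stale `minhash` name the PR removes.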