# AraMix
AraMix ([arXiv:2512.18834](https://arxiv.org/abs/2512.18834v1)) is a deduplicated Arabic pretraining corpus containing 158.8 billion tokens across 167.6 million documents. Rather than scraping the web again, AraMix combines seven publicly available Arabic datasets, applies Arabic-specific quality filtering, and performs cross-dataset deduplication.
## Subsets
## Citation
```bibtex
@misc{alrashed2025aramixrecyclingrefilteringdeduplicating,
      title={AraMix: Recycling, Refiltering, and Deduplicating to Deliver the Largest Arabic Pretraining Corpus},
      author={Sultan Alrashed and Francesco Orabona},
      year={2025},
      eprint={2512.18834},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.18834},
}
```
## License