A text quality classifier for Arabic pretraining data, fine-tuned from XLM-RoBERTa. This model reproduces the FineWeb2-HQ approach (Messmer et al., 2025) for Arabic, as the original authors released their code but not their trained classifiers.
For improved Arabic performance and inference speed, see mmBERT-Arabic-Quality-Classifier.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AdaMLLab/XLM-RoBERTa-Arabic-Quality-Classifier",
)
result = classifier("النص العربي هنا")  # "The Arabic text here"
print(result)
```
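In a data-selection pipeline, scores from a classifier like this are typically used to keep only high-quality documents. A minimal sketch of threshold-based filtering, assuming the classifier returns a quality score per document (the 0.5 threshold and placeholder scores below are illustrative, not from the model card):

```python
def filter_by_quality(docs, scores, threshold=0.5):
    """Keep documents whose quality score meets the threshold."""
    return [doc for doc, score in zip(docs, scores) if score >= threshold]

# In practice, scores would come from the classifier, e.g.:
#   scores = [r["score"] for r in classifier(docs)]
docs = ["نص عالي الجودة", "نص منخفض الجودة"]  # "high-quality text", "low-quality text"
scores = [0.91, 0.12]  # placeholder scores for illustration
print(filter_by_quality(docs, scores))  # keeps only the first document
```

FineWeb2-HQ selects documents by classifier score; the exact selection rule (fixed threshold vs. top-percentile) should be taken from the paper.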
```bibtex
@misc{messmer2025fineweb2hq,
  title={Enhancing Multilingual LLM Pretraining with Model-Based Data Selection},
  author={Bettina Messmer and Vinko Sabolčec and Martin Jaggi},
  year={2025},
  eprint={2502.10361},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.10361},
}

@misc{alrashed2025mixminmatch,
  title={Mix, MinHash, and Match: Cross-Source Agreement for Multilingual Pretraining Datasets},
  author={Sultan Alrashed and Francesco Orabona},
  year={2025},
  eprint={2512.18834v2},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.18834v2},
}
```
Base model: FacebookAI/xlm-roberta-base