---
dataset_name: AraLingBench
pretty_name: AraLingBench
tags:
- arabic
- evaluation
- multiple-choice
- question-answering
language:
- ar
task_categories:
- question-answering
size_categories:
- n<1K
---
# AraLingBench

📄 Paper: [arXiv:2511.14295](https://arxiv.org/abs/2511.14295)
💻 GitHub: [hammoudhasan/AraLingBench](https://github.com/hammoudhasan/AraLingBench)
AraLingBench is a 150-question Arabic multiple-choice benchmark that tests core linguistic competence of language models across five pillars:
- النحو (Grammar)
- الصرف (Morphology)
- الإملاء (Spelling & Orthography)
- فهم اللغة (Reading Comprehension)
- التركيب اللغوي والأسلوبي (Syntax & Stylistics)
All questions are human-authored and validated, with a single correct answer and a difficulty label: Easy, Medium, or Hard.
## Data Fields

Each example has:

- `label` (str): linguistic category
- `context` (str): optional supporting text (may be empty)
- `question` (str): question in Arabic
- `options` (List[str]): answer choices
- `answer` (str): correct choice (matches one of `options`)
- `difficulty` (str): one of `Easy`, `Medium`, `Hard`
Single split:

- `train`: 150 examples (use as an evaluation set)
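As a quick sanity check, the documented schema can be validated in plain Python. This is an illustrative sketch, not part of the dataset's tooling; the record below is fabricated for demonstration and does not come from the benchmark.

```python
# Illustrative schema check for AraLingBench-style records.
# EXPECTED_FIELDS and DIFFICULTIES mirror the "Data Fields" section above.
EXPECTED_FIELDS = {"label", "context", "question", "options", "answer", "difficulty"}
DIFFICULTIES = {"Easy", "Medium", "Hard"}


def validate_example(ex: dict) -> bool:
    """Check that all documented fields are present, the answer matches
    one of the options, and the difficulty label is one of the three
    documented values."""
    return (
        EXPECTED_FIELDS <= ex.keys()
        and ex["answer"] in ex["options"]
        and ex["difficulty"] in DIFFICULTIES
    )


# Fabricated record for illustration only (not real benchmark data).
mock = {
    "label": "Grammar",
    "context": "",
    "question": "...",
    "options": ["A", "B", "C", "D"],
    "answer": "B",
    "difficulty": "Easy",
}
print(validate_example(mock))  # True
```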
## Usage

```python
from datasets import load_dataset

ds = load_dataset("hammh0a/AraLingBench")
example = ds["train"][0]

print(example["label"])
print(example["question"])
print(example["options"])
print(example["answer"])
```
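Building on the fields above, a minimal evaluation loop can format each record as a multiple-choice prompt and score exact-match accuracy against the `answer` field. This is a hedged sketch of one possible harness, not the paper's official evaluation protocol; the two records below are fabricated for illustration.

```python
# Minimal sketch: format an AraLingBench-style record as a lettered
# multiple-choice prompt, then score exact-match accuracy.
from string import ascii_uppercase


def build_prompt(ex: dict) -> str:
    """Prepend the optional context, then list options as A., B., ..."""
    lines = [ex["question"]]
    if ex["context"]:
        lines.insert(0, ex["context"])
    for letter, option in zip(ascii_uppercase, ex["options"]):
        lines.append(f"{letter}. {option}")
    return "\n".join(lines)


def accuracy(predictions: list[str], examples: list[dict]) -> float:
    """Fraction of predictions that exactly match the gold answer string."""
    correct = sum(p == ex["answer"] for p, ex in zip(predictions, examples))
    return correct / len(examples)


# Fabricated records for illustration only (not real benchmark data).
examples = [
    {"context": "", "question": "Q1?", "options": ["x", "y"], "answer": "y"},
    {"context": "", "question": "Q2?", "options": ["a", "b"], "answer": "a"},
]
print(build_prompt(examples[0]))
print(accuracy(["y", "b"], examples))  # 0.5
```

In practice, the model's free-form output would need to be mapped back to one of the option strings (or option letters) before scoring; the exact-match comparison here assumes that mapping has already been done.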
## Citation

If you use AraLingBench, please cite:

```bibtex
@article{zbib2025aralingbench,
  title   = {AraLingBench: A Human-Annotated Benchmark for Evaluating Arabic Linguistic Capabilities of Large Language Models},
  author  = {Mohammad Zbib and Hasan Abed Al Kader Hammoud and Sina Mukalled and Nadine Rizk and Fatima Karnib and Issam Lakkis and Ammar Mohanna and Bernard Ghanem},
  journal = {arXiv preprint arXiv:2511.14295},
  year    = {2025},
  url     = {https://arxiv.org/abs/2511.14295}
}
```