---
dataset_name: AraLingBench
pretty_name: AraLingBench
tags:
  - arabic
  - evaluation
  - multiple-choice
  - question-answering
language:
  - ar
task_categories:
  - question-answering
size_categories:
  - n<1K
---

# AraLingBench

📄 Paper: [arXiv:2511.14295](https://arxiv.org/abs/2511.14295)
💻 GitHub: [hammoudhasan/AraLingBench](https://github.com/hammoudhasan/AraLingBench)

AraLingBench is a 150-question Arabic multiple-choice benchmark that tests core linguistic competence of language models across five pillars:

- النحو (Grammar)
- الصرف (Morphology)
- الإملاء (Spelling & Orthography)
- فهم اللغة (Reading Comprehension)
- التركيب اللغوي والأسلوبي (Syntax & Stylistics)

All questions are human-authored and validated, with a single correct answer and a difficulty label: Easy, Medium, or Hard.

## Data Fields

Each example has:

- `label` (str): linguistic category
- `context` (str): optional supporting text (may be empty)
- `question` (str): the question, in Arabic
- `options` (List[str]): answer choices
- `answer` (str): the correct choice (matches one of `options`)
- `difficulty` (str): one of `Easy`, `Medium`, `Hard`

Single split:

- `train`: 150 examples (use as an evaluation set)
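The schema above can be checked mechanically. Below is a minimal sketch using a hand-written sample record (the values are illustrative placeholders, not taken from the dataset); the field names and allowed difficulty labels follow the Data Fields section:

```python
# Illustrative sample record; values are placeholders, not real dataset rows.
sample = {
    "label": "Grammar",
    "context": "",
    "question": "...",
    "options": ["A", "B", "C", "D"],
    "answer": "B",
    "difficulty": "Medium",
}

def validate(ex: dict) -> None:
    # Every example should expose all six fields.
    for field in ("label", "context", "question", "options", "answer", "difficulty"):
        assert field in ex, f"missing field: {field}"
    # The correct answer must be one of the listed options.
    assert ex["answer"] in ex["options"]
    # Difficulty is one of three labels.
    assert ex["difficulty"] in {"Easy", "Medium", "Hard"}

validate(sample)
print("sample record is well-formed")
```

Running `validate` over all 150 examples of the `train` split is a quick sanity check before evaluation.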

## Usage

```python
from datasets import load_dataset

ds = load_dataset("hammh0a/AraLingBench")
example = ds["train"][0]

print(example["label"])
print(example["question"])
print(example["options"])
print(example["answer"])
```
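Since every example has exactly one correct answer, scoring reduces to exact-match accuracy, optionally broken down by difficulty. The sketch below assumes a `predict` function standing in for a real model call (it is a placeholder, not part of the dataset or any official harness) and runs on toy examples so it is self-contained:

```python
from collections import Counter

# Stand-in for a real model call: always picks the first option so the
# loop is runnable end to end. Replace with an actual LLM query.
def predict(question: str, options: list[str]) -> str:
    return options[0]

def evaluate(examples: list[dict]) -> dict:
    correct = Counter()
    total = Counter()
    for ex in examples:
        pred = predict(ex["question"], ex["options"])
        total[ex["difficulty"]] += 1
        if pred == ex["answer"]:
            correct[ex["difficulty"]] += 1
    # Per-difficulty accuracy plus an overall score.
    scores = {d: correct[d] / total[d] for d in total}
    scores["overall"] = sum(correct.values()) / sum(total.values())
    return scores

# Toy examples standing in for ds["train"]:
toy = [
    {"question": "q1", "options": ["a", "b"], "answer": "a", "difficulty": "Easy"},
    {"question": "q2", "options": ["a", "b"], "answer": "b", "difficulty": "Hard"},
]
print(evaluate(toy))  # {'Easy': 1.0, 'Hard': 0.0, 'overall': 0.5}
```

Swapping `toy` for `ds["train"]` and `predict` for a model query gives a per-pillar and per-difficulty breakdown in a few lines.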

## Citation

If you use AraLingBench, please cite:

```bibtex
@article{zbib2025aralingbench,
  title        = {AraLingBench: A Human-Annotated Benchmark for Evaluating Arabic Linguistic Capabilities of Large Language Models},
  author       = {Mohammad Zbib and Hasan Abed Al Kader Hammoud and Sina Mukalled and Nadine Rizk and Fatima Karnib and Issam Lakkis and Ammar Mohanna and Bernard Ghanem},
  journal      = {arXiv preprint arXiv:2511.14295},
  year         = {2025},
  url          = {https://arxiv.org/abs/2511.14295}
}
```