---
license: other
license_name: creative-commons-attribution-noncommercial-noderivatives-4-0-international
license_link: https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode
task_categories:
  - question-answering
language:
  - ar
tags:
  - arabic
  - cross-dialect
  - parallel
  - extractive-qa
  - squad-format
  - msa
  - egyptian-arabic
  - gulf-arabic
  - levantine-arabic
  - maghrebi-arabic
  - vlogs
  - narratives
  - curated
  - evaluation-benchmark
  - cross-lingual-transfer
pretty_name: 'ArDQA: Cross-Dialectal Arabic QA Benchmark'
size_categories:
  - 1K<n<10K
---

Dataset Card for ArDQA

ArDQA is a cross-dialect Arabic QA benchmark spanning three domains. Each domain provides parallel QA triples {context, question, answer} across five Arabic varieties: MSA, Egyptian, Gulf, Levantine, and Maghrebi. The benchmark contains 8,150 QA triples in total and is designed to evaluate cross-dialectal transfer in Arabic extractive QA.

Dataset Details

Dataset Description

  • Curated by: Native-speaker annotators (see Annotation section).
  • Language(s): Arabic (MSA and the Egyptian, Gulf, Levantine, and Maghrebi dialects).
  • License: CC BY-NC-ND 4.0
    Research/teaching use, attribution required, no commercial use, no derivatives.
    Legal text: https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode

Composition

  • ArDQA-SQuAD: Curated from Arabic-SQuAD v2.0, then translated by native speakers into four dialects with manual span annotation to preserve one-to-one alignment.
  • ArDQA-Vlogs: Colloquial lifestyle vlog transcripts → QA construction → dialect translations → manual span annotation.
  • ArDQA-Narratives: Cultural narratives and folklore from online videos, following the same pipeline as Vlogs, with longer, descriptive answers.

Quality control

In every domain, native speakers translated independently and cross-checked one another's work, with an expert adjudicating disagreements. Span consistency was validated using answer-to-context length ratios to maintain strict alignment across dialects.

Paper

Under Review

Direct Use

  • Evaluation of zero-shot and few-shot cross-dialectal transfer in Arabic QA.
  • Analysis of dialectal robustness for Arabic extractive QA models.
  • Benchmarking domain sensitivity across SQuAD-like, vlog, and narrative content.
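
Since the files are SQuAD-format JSON hosted on the Hub, the datasets library should be able to load them directly (a minimal sketch; the repo ID comes from this card's URL, but whether configurations or data_files must be named explicitly depends on the repository layout):

```python
from datasets import load_dataset

# Repo ID from this card's URL. If the loader cannot infer the file layout,
# pass data_files explicitly with the actual JSON file names from the repo.
ds = load_dataset("MahaJar/ArDQA")
print(ds)
```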

Dataset Structure

Format

ArDQA follows the SQuAD v2.0 JSON format:

```
root
├── data: [
│   ├── {
│   │   ├── title: string
│   │   └── paragraphs: [
│   │       ├── {
│   │       │   ├── context: string
│   │       │   └── qas: [
│   │       │       ├── {
│   │       │       │   ├── id: string
│   │       │       │   ├── question: string
│   │       │       │   ├── is_impossible: boolean
│   │       │       │   └── answers: [
│   │       │       │       ├── { text: string, answer_start: int }
│   │       │       │       └── ...
│   │       │       └── ...
│   │       └── ...
│   └── ...
└── (optional) version: string
```
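
Concretely, one file can be flattened into per-question records with plain Python (a sketch; the file name below is a placeholder, so substitute an actual file from the repository):

```python
import json

# Placeholder file name; use an actual ArDQA JSON file.
with open("ardqa_squad_dev_egy.json", encoding="utf-8") as f:
    squad = json.load(f)

records = []
for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]
        for qa in paragraph["qas"]:
            records.append({
                "id": qa["id"],
                "question": qa["question"],
                "context": context,
                "is_impossible": qa["is_impossible"],
                # Each answer is a span text plus its character offset in context.
                "answers": [(a["text"], a["answer_start"]) for a in qa["answers"]],
            })

print(f"{len(records)} QA triples loaded")
```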

Splits

Each ArDQA domain is divided into development and test splits to enable zero-shot evaluation (train on MSA or other sources, then evaluate on dialects without target-dialect fine-tuning).

Counts per domain and split

| fold | ArDQA-SQuAD (# parallel / # total) | ArDQA-Vlogs (# parallel / # total) | ArDQA-Narratives (# parallel / # total) |
|------|------------------------------------|------------------------------------|-----------------------------------------|
| dev  | 131 / 655                          | 171 / 855                          | 160 / 800                               |
| test | 368 / 1,840                        | 436 / 2,180                        | 364 / 1,820                             |
  • # parallel = Number of {context, question, answer} triples aligned across all five Arabic varieties.
  • # total = # parallel × 5 varieties (MSA, Egyptian, Gulf, Levantine, Maghrebi).
  • Totals across all domains: dev = 2,310, test = 5,840, overall = 8,150 QA triples.
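
The parallel-times-five relation and the split totals above can be verified with a few lines of arithmetic:

```python
# Parallel triple counts per split, ordered SQuAD, Vlogs, Narratives.
parallel = {"dev": [131, 171, 160], "test": [368, 436, 364]}

for split, counts in parallel.items():
    # Each parallel triple appears once per variety (MSA + 4 dialects).
    print(split, sum(c * 5 for c in counts))  # dev 2310, test 5840

print("overall", sum(c * 5 for counts in parallel.values() for c in counts))  # 8150
```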

Source Data

Original texts come from Arabic-SQuAD v2.0 and public online video transcripts (vlogs, narratives). QA items and dialect translations were produced by native-speaker annotators.

Annotations

Annotation process

  • Native speakers independently translate and annotate spans.
  • Cross-review and expert adjudication.
  • Consistency checks (e.g., answer length vs. context, span alignment across dialects).
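
For illustration, a check in the spirit of the length-ratio validation above might look as follows (a hypothetical sketch; the annotators' actual scripts and thresholds are not published, and the 0.5 ratio is an assumption):

```python
def span_is_consistent(answer: str, context: str, max_ratio: float = 0.5) -> bool:
    """Hypothetical span-consistency check; the threshold is an assumption.

    Rejects answers that do not occur verbatim in the context (the span must
    be extractive) or that are suspiciously long relative to the context.
    """
    if answer not in context:
        return False
    return len(answer) / len(context) <= max_ratio
```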

Experiment (brief)

We evaluate zero-shot cross-dialectal transfer by training only on MSA data (Arabic-SQuAD v2.0) and testing on dialectal data without target-dialect fine-tuning.

  • Models: AraELECTRA-MSA-QA, CAMeLBERT-MSA-QA, AraBERT-MSA-QA.
  • Data: ArDQA dev/test across three domains (SQuAD, Vlogs, Narratives) and five varieties: MSA, Egyptian (EGY), Gulf (GLF), Levantine (LEV), Maghrebi (MGR).
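
Zero-shot inference on a single example can be run with the standard transformers question-answering pipeline (a sketch; the model path is a placeholder for one of the MSA-fine-tuned checkpoints, and the Arabic example is a toy, not taken from ArDQA):

```python
from transformers import pipeline

# Placeholder checkpoint path; substitute the actual MSA-fine-tuned QA model.
qa = pipeline("question-answering", model="path/to/araelectra-msa-qa")

# Toy MSA example for illustration only.
pred = qa(
    question="متى تأسست الجامعة؟",  # "When was the university founded?"
    context="تأسست الجامعة عام 1957 في الرياض.",  # "The university was founded in 1957 in Riyadh."
)
print(pred["answer"], pred["score"])
```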

Evaluation Metrics

  • EM (Exact Match): 1 if the predicted span matches the gold answer exactly; else 0.
  • F1: token-level harmonic mean of precision and recall between predicted and gold spans (rewards partial overlap).
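
For reference, a simplified implementation of both metrics with whitespace tokenization (the official SQuAD script additionally normalizes answers before comparison, which this sketch omits):

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> int:
    # 1 if the predicted span matches the gold answer exactly, else 0.
    return int(pred.strip() == gold.strip())

def f1(pred: str, gold: str) -> float:
    pred_tokens, gold_tokens = pred.split(), gold.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```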

Reference Results (Zero-Shot Cross-Dialectal Transfer)

ArDQA-SQuAD (F1 / EM)

| Model | EGY | GLF | LEV | MGR | MSA |
|---|---|---|---|---|---|
| AraELECTRA-MSA-QA | 71.66 / 59.51 | 73.76 / 60.87 | 66.72 / 50.54 | 66.35 / 53.80 | 76.19 / 61.96 |
| CAMeLBERT-MSA-QA | 53.98 / 28.04 | 54.91 / 26.68 | 51.49 / 25.86 | 46.90 / 23.96 | 60.27 / 29.13 |
| AraBERT-MSA-QA | 12.53 / 4.04 | 11.01 / 3.74 | 12.01 / 3.88 | 12.06 / 3.54 | 11.80 / 3.74 |

ArDQA-Vlogs (F1 / EM)

| Model | EGY | GLF | LEV | MGR | MSA |
|---|---|---|---|---|---|
| AraELECTRA-MSA-QA | 63.90 / 37.93 | 64.47 / 41.74 | 63.00 / 40.37 | 57.11 / 31.19 | 67.01 / 42.20 |
| CAMeLBERT-MSA-QA | 40.66 / 15.09 | 39.12 / 14.63 | 37.50 / 14.17 | 29.69 / 10.04 | 46.66 / 16.01 |
| AraBERT-MSA-QA | 13.08 / 4.03 | 11.18 / 4.03 | 12.03 / 4.45 | 12.49 / 4.03 | 11.53 / 4.68 |

ArDQA-Narratives (F1 / EM)

| Model | EGY | GLF | LEV | MGR | MSA |
|---|---|---|---|---|---|
| AraELECTRA-MSA-QA | 35.75 / 11.26 | 40.80 / 14.20 | 38.31 / 12.98 | 31.70 / 6.87 | 43.83 / 14.05 |
| CAMeLBERT-MSA-QA | 22.33 / 5.82 | 25.53 / 8.02 | 20.74 / 6.56 | 23.82 / 7.47 | 25.13 / 9.68 |
| AraBERT-MSA-QA | 16.20 / 4.01 | 16.82 / 4.27 | 15.41 / 4.01 | 18.72 / 4.27 | 18.07 / 4.01 |

Citation

If you use ArDQA, please cite the dataset:

Dataset

BibTeX

```bibtex
@dataset{ardqa_dataset_2025,
  title  = {ArDQA: Cross-Dialect Arabic QA Benchmark},
  author = {Althobaiti, Maha Jarallah},
  year   = {2025},
  note   = {Hugging Face dataset, CC BY-NC-ND 4.0},
  url    = {https://huggingface.co/datasets/MahaJar/ArDQA}
}
```