# ArSyra NLP Benchmark – Arabic Dialect Evaluation Suite
The first Arabic NLP benchmark that spans dialects, not just MSA.
## Dataset Summary
A structured evaluation dataset for benchmarking Arabic NLP models on dialect-aware tasks. Contains sentiment-annotated text, quality control labels with human judgments, and instruction-description pairs for testing model comprehension and generation capabilities.
Unlike most Arabic benchmarks that focus exclusively on MSA, ArSyra NLP Benchmark spans multiple dialect groups, enabling fair evaluation of how well models handle the Arabic that people actually speak.
| Statistic | Value |
|---|---|
| Total Records | 2,419 |
| Linguistic Categories | 3 |
| Countries Represented | 15 (Tunisia, Syria, Egypt, Saudi Arabia, Morocco, Algeria, Iraq, Jordan, Lebanon, UAE, Sudan, Yemen, Libya, Kuwait, Palestine) |
| Dialect Groups | 7 (Maghrebi, Levantine, Egyptian, Gulf, Iraqi, Sudanese, Other) |
| Average Quality Score | 78.2/100 |
| License | CC-BY-NC-SA-4.0 |
| Last Updated | 2026-02-21 |
## How ArSyra Compares to Existing Arabic Datasets
| Dataset | Records | Dialects | Countries | Categories | Crowdsourced | MSA–Dialect Pairs |
|---|---|---|---|---|---|---|
| ArSyra (arsyra-nlp-benchmark) | 2,419 | 7 | 15 | 3 | ✅ | ✅ |
| NADI (shared task) | ~20K | 4 | 21 | 1 | ❌ (Twitter) | ❌ |
| MADAR | ~12K | 6 | 25 | 1 | ❌ (paid) | ✅ |
| AOC (Arabic Online Commentary) | ~100K | – | – | 3 | ❌ (scraped) | ❌ |
| DART (Dialect Arabic) | ~25K | 5 | – | 1 | ❌ (Twitter) | ❌ |
| ArSentD-LEV | ~4K | 1 | 4 | 1 | ❌ (Twitter) | ❌ |

ArSyra's advantages: authentic native-speaker data (not scraped), a multi-category structure, parallel MSA–dialect text, per-record quality scores, and continuous growth.
## Related ArSyra Datasets
Explore our other specialized Arabic dialect datasets:
- ArSyra Complete – Multi-Dialect Arabic Dataset – The most comprehensive crowdsourced Arabic dialect dataset available.
- ArSyra Chatbot – Conversational Arabic Training Data – Purpose-built training data for Arabic conversational AI systems.
- ArSyra Translation – Arabic Dialect–MSA Parallel Corpus – Parallel corpus bridging Modern Standard Arabic and regional dialects.
- 🇪🇬 ArSyra Egyptian Arabic (Masri) Dataset – The most widely understood Arabic dialect, now as structured NLP data.
- 🇸🇾 ArSyra Levantine Arabic (Shami) Dataset – Authentic Shami dialect data from Syria, Lebanon, Jordan, and Palestine.
- 🇸🇦 ArSyra Gulf Arabic (Khaliji) Dataset – Gulf Arabic data from the Arabian Peninsula's rapidly growing digital population.

Browse all datasets: huggingface.co/ArSyra | arsyra.com/datasets.html
## Supported Tasks
- Text Classification – Train classifiers for dialect identification, sentiment analysis, and content categorization.
- Text Generation – Fine-tune language models to generate authentic dialectal Arabic text.
## Languages
Primary Language: Arabic (ar)
This dataset contains text in Modern Standard Arabic (MSA) and the following regional dialect groups: Maghrebi, Levantine, Egyptian, Gulf, Iraqi, Sudanese, Other. Country-level dialect codes: ar-TN, ar-SY, ar-EG, ar-SA, ar-MA, ar-DZ, ar-IQ, ar-JO, ar-LB, ar-AE, ar-SD, ar-YE, ar-LY, ar-KW, ar-PS.
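Since each record's dialect group is derived from the speaker's registered country, downstream code often needs a country-to-group lookup. The mapping below is reconstructed from the lists above as an assumption: the card does not publish the exact table, and the assignment of `YE` to the catch-all `other` group is a guess.

```python
# One plausible ISO country code -> dialect group mapping (assumption; the
# card lists 15 countries and 7 groups but not the exact assignment).
DIALECT_GROUP = {
    "TN": "maghrebi", "MA": "maghrebi", "DZ": "maghrebi", "LY": "maghrebi",
    "SY": "levantine", "LB": "levantine", "JO": "levantine", "PS": "levantine",
    "EG": "egyptian",
    "SA": "gulf", "AE": "gulf", "KW": "gulf",
    "IQ": "iraqi",
    "SD": "sudanese",
    "YE": "other",  # guess: Yemeni is not one of the six named groups
}

def dialect_group(country_code: str) -> str:
    """Return the broad dialect group for an ISO 3166-1 alpha-2 code."""
    return DIALECT_GROUP.get(country_code.upper(), "other")
```

This covers all 15 countries in the table above and yields the card's 7 dialect groups.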
## Dataset Structure
### Data Instances
Each record represents a single response from a verified native Arabic speaker to a structured linguistic prompt:
```json
{
  "question_code": "I-0015",
  "category": "instructions",
  "subcategory": "food",
  "question_text": "اشرح كيف تذبح خروف أو دجاجة بلهجتك (خطوة بخطوة)",
  "answer_text": "تقبل القبلة وتسمي بسم الله وتذبح",
  "response_time_ms": 84609,
  "quality_score": 100,
  "country": "TN",
  "answered_at": "2026-02-17T21:15:07.495Z",
  "quality_grade": "A",
  "speaker_hash": "anon-d2ViLTE3"
}
```
### Data Fields
| Field | Type | Description |
|---|---|---|
| `text` | string | The Arabic text content; may be in dialect, MSA, or a mix |
| `category` | string | Linguistic category (e.g., dialect, proverbs, sentiment, conversation_pairs) |
| `country` | string | ISO 3166-1 alpha-2 country code of the speaker (e.g., EG, SA, MA) |
| `dialect_group` | string | Broad dialect group: egyptian, levantine, gulf, maghrebi, iraqi, or sudanese |
| `quality_score` | int | Human-assigned quality rating from 0 to 100 |
| `msa_text` | string | Modern Standard Arabic equivalent (where available) |
| `context` | string | Additional context about the prompt or response |
| `speaker_hash` | string | Anonymized speaker identifier |
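Records can be checked against this documented schema before use. Below is a minimal sketch: the field list is copied from the table above, and `validate_record` is a hypothetical helper (not part of the `datasets` API) that tolerates `None` in optional fields such as `msa_text`.

```python
# Documented schema: field name -> expected Python type
EXPECTED_FIELDS = {
    "text": str, "category": str, "country": str, "dialect_group": str,
    "quality_score": int, "msa_text": str, "context": str, "speaker_hash": str,
}

def validate_record(record):
    """Return a list of schema problems; an empty list means the record conforms.
    None values are tolerated, since optional fields may be absent."""
    problems = []
    for name, expected in EXPECTED_FIELDS.items():
        if name not in record:
            problems.append(f"missing field: {name}")
        elif record[name] is not None and not isinstance(record[name], expected):
            problems.append(
                f"{name}: expected {expected.__name__}, got {type(record[name]).__name__}"
            )
    return problems
```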
### Data Splits
| Split | Examples |
|---|---|
| train | 2,419 |
Note: A single `train` split is provided. We recommend creating your own train/validation/test splits based on your use case. For dialect-fair evaluation, stratify by `country` or `dialect_group`.
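For example, a dependency-free way to build such a stratified hold-out split (a sketch; `stratified_split` is our own helper, not a `datasets` API):

```python
import random
from collections import defaultdict

def stratified_split(rows, key="dialect_group", test_frac=0.2, seed=42):
    """Hold out test_frac of the rows from each group, so every
    dialect group (or country) appears in both splits."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    train, test = [], []
    for members in groups.values():
        rng.shuffle(members)
        n_test = max(1, round(len(members) * test_frac))
        test.extend(members[:n_test])
        train.extend(members[n_test:])
    return train, test
```

Holding out at least one row per group keeps rare dialect groups from vanishing entirely from the test set.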
### Category Breakdown
| Category | Records | % of Total |
|---|---|---|
| instructions | 1,400 | 57.9% |
| sentiment | 719 | 29.7% |
| control | 300 | 12.4% |
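The percentages in this table follow directly from the record counts:

```python
# Category counts from the table above
counts = {"instructions": 1400, "sentiment": 719, "control": 300}
total = sum(counts.values())
shares = {cat: round(100 * n / total, 1) for cat, n in counts.items()}

print(total)   # 2419, matching the dataset total
print(shares)  # {'instructions': 57.9, 'sentiment': 29.7, 'control': 12.4}
```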
## Dataset Creation
### Curation Rationale
Existing Arabic NLP benchmarks (ORCA, ALUE) focus almost exclusively on MSA text, creating a misleading picture of model capabilities. Arabic speakers primarily communicate in dialect, and models need to be evaluated accordingly. ArSyra NLP Benchmark fills this gap.
### Source Data
#### Initial Data Collection and Normalization
Data was collected through the ArSyra platform (arsyra.com), a gamified crowdsourcing system where verified native Arabic speakers answer structured linguistic prompts about their dialect. The platform:
- Verifies speakers through phone number verification (region-specific) and language verification questions
- Presents structured prompts across multiple linguistic categories: dialect translations, conversation pairs, proverbs, slang, code-switching, sentiment expressions, instruction following, formality registers, and more
- Gamifies collection through points, leaderboards, and incentive systems to maintain engagement and data quality
- Automatically enriches responses with metadata: country, dialect group, category, and quality indicators
#### Who are the source language producers?
Native Arabic speakers from 15 countries across the Arab world (Tunisia, Syria, Egypt, Saudi Arabia, Morocco, Algeria, Iraq, Jordan, Lebanon, UAE, Sudan, Yemen, Libya, Kuwait, Palestine), participating voluntarily through the ArSyra platform. Speakers represent diverse demographics including age groups, education levels, and urban/rural backgrounds.
### Annotations
#### Annotation Process
Each response receives:
- Automatic quality scoring based on response length, character set validation, and consistency checks
- Category labeling derived from the prompt type
- Dialect group classification based on the speaker's registered country
- Cross-speaker validation where multiple speakers from the same region answer the same prompts
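The card does not publish the automatic scoring formula, but a heuristic in that spirit (length bounds plus Arabic character-set validation) might look like the sketch below; the weights and thresholds are invented for illustration, and the real pipeline also applies consistency checks.

```python
import re

# Matches characters in the main Unicode Arabic block
ARABIC_CHAR = re.compile(r"[\u0600-\u06FF]")

def heuristic_quality_score(text: str, min_len: int = 10, max_len: int = 2000) -> int:
    """Score a response 0-100 from length and Arabic-script ratio.
    Illustrative only; weights/thresholds are assumptions, not ArSyra's."""
    if not text or not text.strip():
        return 0
    stripped = text.strip()
    length_ok = min_len <= len(stripped) <= max_len
    arabic_ratio = len(ARABIC_CHAR.findall(stripped)) / len(stripped)
    score = (40 if length_ok else 10) + int(60 * min(1.0, arabic_ratio * 1.5))
    return min(score, 100)
```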
#### Who are the annotators?
The primary "annotators" are the native speakers themselves, who provide dialectal data along with structured metadata. Quality scoring is automated. No external annotators are used for labeling.
### Personal and Sensitive Information
- All speaker identifiers are anonymized; original user IDs are replaced with non-reversible hashed identifiers
- No personally identifiable information (names, locations, phone numbers) is included
- Taboo and sensitive content (where present) is clearly labeled by category
- Speakers provided informed consent during registration for their anonymized data to be used for research
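A salted-hash scheme along these lines would produce such identifiers. This is an illustrative sketch only: the card does not disclose ArSyra's actual hashing recipe, and `anonymize_speaker` is a hypothetical helper.

```python
import hashlib

def anonymize_speaker(user_id: str, salt: str) -> str:
    """Map a raw user ID to a non-reversible identifier like 'anon-...'.
    Keeping the salt secret prevents rebuilding the mapping by
    brute-forcing candidate user IDs."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode("utf-8")).hexdigest()
    return f"anon-{digest[:12]}"
```

The same (ID, salt) pair always yields the same token, so one speaker's records stay linkable without exposing who they are.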
## Considerations for Using the Data
### Social Impact
This dataset contributes to Arabic NLP equity by providing training data for the dialects actually spoken by 400+ million people. Most existing Arabic NLP resources focus exclusively on Modern Standard Arabic, which is no one's native language. By bridging this gap, ArSyra helps ensure that Arabic-speaking populations benefit equally from advances in language technology.
### Discussion of Biases
Known biases to consider:
- Platform access bias – Contributors need internet access and a smartphone, potentially underrepresenting older, rural, or lower-income speakers
- Country representation – Some countries may be overrepresented depending on recruitment channels
- Urban bias – Online populations tend to be more urban, potentially underrepresenting rural dialect variants
- Literacy bias – Written responses may differ from purely spoken dialect, as speakers may unconsciously shift toward MSA
- Self-selection bias – Voluntary participants may not represent the full demographic spectrum
### Other Known Limitations
- Written approximations – Dialectal Arabic has limited standardized orthography; spelling varies across speakers
- Prompt influence – Structured prompts may elicit more formal responses than spontaneous speech
- Quality variation – Despite quality scoring, some responses may be lower quality
- Temporal snapshot – Language evolves; slang and expressions may become dated over time
## Additional Information
### Use Cases
- Benchmarking Arabic LLMs on dialectal understanding
- Evaluating sentiment analysis across dialect groups
- Testing instruction-following in non-MSA Arabic
- Comparing model performance across Arabic varieties
## Get the Full Dataset
This repository contains a preview sample of 50 records out of 2,419 total. Purchase the full dataset instantly at arsyra.com/datasets.html
### Pricing
| Tier | Details |
|---|---|
| Preview (this repo) | 50 sample records – free to download and evaluate |
| Full Dataset | 2,419 records – instant download after purchase |
| Academic License | From $29 – for research and non-commercial use |
| Commercial License | From $99 – for products, SaaS, and enterprise use |

[Buy Now →](https://arsyra.com/datasets.html)
What you get with the full dataset:
- All 2,419 quality-filtered records
- Per-category JSONL splits for easy loading
- Instant download as ZIP after payment
- Regular updates as our community grows
- Priority support for integration questions
Questions? Email support@arsyra.com
## Quick Start

```python
from datasets import load_dataset

# Load the preview sample
dataset = load_dataset("ArSyra/arsyra-nlp-benchmark")
print(f"Preview: {len(dataset['train'])} sample records")

# Browse examples
for example in dataset["train"].select(range(5)):
    print(f"{example['country']} ({example['dialect_group']}): {example['text'][:80]}...")

# For the full dataset (2,419 records), visit: https://arsyra.com/datasets.html
```
## Licensing Information
The preview sample included in this repository is released under CC-BY-NC-SA-4.0.
The full dataset is available under flexible licensing terms:
| License | Use Case | Pricing |
|---|---|---|
| CC-BY-NC-SA-4.0 | Academic research, non-commercial use | From $29 |
| Commercial License | Enterprise, products, SaaS applications | From $99 |
Purchase a license at arsyra.com/datasets.html, or email support@arsyra.com for custom licensing.
## Citation Information
If you use this dataset in your research, please cite:

```bibtex
@dataset{arsyra_arsyra_nlp_benchmark_2026,
  title     = {ArSyra NLP Benchmark: Arabic Dialect Evaluation Suite},
  author    = {{ArSyra Team}},
  year      = {2026},
  url       = {https://huggingface.co/datasets/ArSyra/arsyra-nlp-benchmark},
  publisher = {HuggingFace},
  license   = {CC-BY-NC-SA-4.0},
  note      = {Crowdsourced Arabic dialect dataset with 2,419 records from 15 countries}
}
```
## Contributions
Thanks to the Arabic-speaking community who contributed their dialectal knowledge through the ArSyra platform. To contribute, visit arsyra.com.
Dataset card generated by the ArSyra Publish Pipeline. Last updated: 2026-02-21.