
🌐 ArSyra Translation — Arabic Dialect–MSA Parallel Corpus

Parallel corpus bridging Modern Standard Arabic and regional dialects.



Dataset Summary

A parallel corpus mapping between Modern Standard Arabic (MSA) and regional dialects, supplemented with paraphrase pairs and formality-shifted equivalents. This dataset enables training of dialect-aware machine translation models, dialect identification systems, and style-transfer applications.

Entries span multiple dialect groups — Egyptian, Levantine, Gulf, Maghrebi, and Iraqi — each with corresponding MSA equivalents provided by native speakers. It is an essential resource for narrowing the MSA-dialect divide that limits most Arabic NLP tools today.

| Statistic | Value |
|---|---|
| Total Records | 8,912 |
| Linguistic Categories | 4 |
| Countries Represented | 16 (Tunisia, Syria, EU, Egypt, Saudi Arabia, Morocco, Algeria, Iraq, Jordan, Lebanon, UAE, Sudan, Yemen, Libya, Kuwait, Palestine) |
| Dialect Groups | 7 (Maghrebi, Levantine, Egyptian, Gulf, Iraqi, Sudanese, Other) |
| Average Quality Score | 79.8/100 |
| License | CC-BY-NC-SA-4.0 |
| Last Updated | 2026-02-21 |

How ArSyra Compares to Existing Arabic Datasets

| Dataset | Records | Dialects | Countries | Categories | Crowdsourced | MSA↔Dialect Pairs |
|---|---|---|---|---|---|---|
| ArSyra (arsyra-translation) | 8,912 | 7 | 16 | 4 | ✅ | ✅ |
| NADI (shared task) | ~20K | 4 | 21 | 1 | ❌ (Twitter) | ❌ |
| MADAR | ~12K | 6 | 25 | 1 | ✅ (paid) | ✅ |
| AOC (Arabic Online Commentary) | ~100K | 3 | | | ❌ (scraped) | ❌ |
| DART (Dialect Arabic) | ~25K | 5 | | 1 | ❌ (Twitter) | ❌ |
| ArSentD-LEV | ~4K | 1 | 4 | 1 | ❌ (Twitter) | ❌ |

ArSyra's advantages: Authentic native-speaker data (not scraped), multi-category structure, parallel MSA↔dialect text, quality scored, and continuously growing.

Related ArSyra Datasets

Explore our other specialized Arabic dialect datasets:

Browse all datasets: huggingface.co/ArSyra | arsyra.com/datasets.html

Supported Tasks

  • Machine Translation — Build translation systems between MSA and regional Arabic dialects.
  • Text Generation — Fine-tune language models to generate authentic dialectal Arabic text.
  • Text-to-Text Generation — Paraphrasing, style transfer, and formality control tasks.

Languages

Primary Language: Arabic (ar)

This dataset contains text in Modern Standard Arabic (MSA) and the following regional dialect groups: Maghrebi, Levantine, Egyptian, Gulf, Iraqi, Sudanese, Other. Country-level dialect codes: ar-TN, ar-SY, ar-EU, ar-EG, ar-SA, ar-MA, ar-DZ, ar-IQ, ar-JO, ar-LB, ar-AE, ar-SD, ar-YE, ar-LY, ar-KW, ar-PS.
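As a minimal sketch, country codes can be bucketed into the broad dialect groups listed above. The card does not document the exact country-to-group assignment, so the mapping below is an illustrative assumption, not the dataset's actual logic:

```python
# Hypothetical mapping from ISO 3166-1 country codes to the broad
# dialect groups named in this card; the dataset's real assignment
# is not documented, so this grouping is an illustrative guess.
DIALECT_GROUPS = {
    "EG": "egyptian",
    "SY": "levantine", "JO": "levantine", "LB": "levantine", "PS": "levantine",
    "SA": "gulf", "AE": "gulf", "KW": "gulf",
    "TN": "maghrebi", "MA": "maghrebi", "DZ": "maghrebi", "LY": "maghrebi",
    "IQ": "iraqi",
    "SD": "sudanese",
}

def dialect_group(country_code: str) -> str:
    """Return the broad dialect group for a country code, or 'other'."""
    return DIALECT_GROUPS.get(country_code.upper(), "other")
```

Codes outside the map (e.g. ar-EU, ar-YE in the list above) fall through to the catch-all "other" group.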


Dataset Structure

Data Instances

Each record represents a single response from a verified native Arabic speaker to a structured linguistic prompt:

```json
{
  "question_code": "V-0100",
  "category": "vocabulary",
  "subcategory": "food",
  "question_text": "نعناع",
  "answer_text": "نعناع",
  "response_time_ms": 25062,
  "quality_score": 83,
  "country": "TN",
  "answered_at": "2026-02-17T20:57:29.235Z",
  "quality_grade": "B",
  "speaker_hash": "anon-d2ViLTE3"
}
```
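The relationship between quality_score and quality_grade is not documented. One plausible banding, shown purely as a hypothetical sketch, uses 10-point letter bands, which is at least consistent with the sample record (score 83, grade "B"):

```python
def quality_grade(score: int) -> str:
    """Map a 0-100 quality_score to a letter grade.

    The card does not document the grade bands; the 10-point bands
    below are an assumption, consistent with the sample record
    (score 83 -> grade "B"), not the platform's actual rule.
    """
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"
```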

Data Fields

| Field | Type | Description |
|---|---|---|
| text | string | The Arabic text content — may be in dialect, MSA, or a mix |
| category | string | Linguistic category (e.g., dialect, proverbs, sentiment, conversation_pairs) |
| country | string | ISO 3166-1 alpha-2 country code of the speaker (e.g., EG, SA, MA) |
| dialect_group | string | Broad dialect group: egyptian, levantine, gulf, maghrebi, iraqi, or sudanese |
| quality_score | int | Human-assigned quality rating from 0 to 100 |
| msa_text | string | Modern Standard Arabic equivalent (where available) |
| context | string | Additional context about the prompt or response |
| speaker_hash | string | Anonymized speaker identifier |

Data Splits

| Split | Examples |
|---|---|
| train | 8,912 |

Note: A single train split is provided. We recommend creating your own train/validation/test splits based on your use case. For dialect-fair evaluation, stratify by country or dialect_group.
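A dialect-stratified split of the kind suggested above can be sketched in plain Python; the function below is illustrative, and for datasets.Dataset objects the built-in route would be class_encode_column followed by train_test_split(stratify_by_column=...):

```python
import random
from collections import defaultdict

def stratified_split(records, key="dialect_group", test_frac=0.1, seed=0):
    """Split a list of dicts into (train, test) so that each group's
    share of the data is roughly preserved in both halves.
    A minimal sketch, not a production splitter."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for record in records:
        by_group[record[key]].append(record)
    train, test = [], []
    for group in by_group.values():
        rng.shuffle(group)
        n_test = max(1, round(len(group) * test_frac))
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test
```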

Category Breakdown

| Category | Records | % of Total |
|---|---|---|
| dialect | 4,073 | 45.7% |
| vocabulary | 2,404 | 27.0% |
| formality_transfer | 1,439 | 16.1% |
| paraphrase | 996 | 11.2% |
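A breakdown like the one above can be recomputed from the raw records with a few lines of standard-library Python (field names assume the documented schema):

```python
from collections import Counter

def category_breakdown(records):
    """Return {category: (count, percent_of_total)}, largest first."""
    counts = Counter(r["category"] for r in records)
    total = sum(counts.values())
    return {cat: (n, round(100 * n / total, 1))
            for cat, n in counts.most_common()}
```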

Dataset Creation

Curation Rationale

Arabic speakers constantly code-switch between MSA and their regional dialect, yet most MT systems treat Arabic as a single monolithic language. ArSyra Translation provides the parallel data needed to build systems that understand and translate between the Arabic varieties people actually use.

Source Data

Initial Data Collection and Normalization

Data was collected through the ArSyra platform (arsyra.com), a gamified crowdsourcing system where verified native Arabic speakers answer structured linguistic prompts about their dialect. The platform:

  1. Verifies speakers through phone number verification (region-specific) and language verification questions
  2. Presents structured prompts across multiple linguistic categories: dialect translations, conversation pairs, proverbs, slang, code-switching, sentiment expressions, instruction following, formality registers, and more
  3. Gamifies collection through points, leaderboards, and incentive systems to maintain engagement and data quality
  4. Automatically enriches responses with metadata: country, dialect group, category, and quality indicators

Who are the source language producers?

Native Arabic speakers from 16 countries across the Arab world (Tunisia, Syria, EU, Egypt, Saudi Arabia, Morocco, Algeria, Iraq, Jordan, Lebanon, UAE, Sudan, Yemen, Libya, Kuwait, Palestine), participating voluntarily through the ArSyra platform. Speakers represent diverse demographics including age groups, education levels, and urban/rural backgrounds.

Annotations

Annotation Process

Each response receives:

  • Automatic quality scoring based on response length, character set validation, and consistency checks
  • Category labeling derived from the prompt type
  • Dialect group classification based on the speaker's registered country
  • Cross-speaker validation where multiple speakers from the same region answer the same prompts
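The character-set validation step might look something like the heuristic below. This is a hypothetical sketch of that kind of check, not the platform's actual scoring code:

```python
import re

# Basic Arabic Unicode block (U+0600-U+06FF). Extended Arabic blocks
# exist, but this range covers ordinary dialectal and MSA text.
ARABIC_CHAR = re.compile(r"[\u0600-\u06FF]")

def looks_arabic(text: str, min_ratio: float = 0.5) -> bool:
    """Heuristic check: at least min_ratio of the non-whitespace
    characters fall in the basic Arabic block."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return False
    arabic = sum(1 for c in chars if ARABIC_CHAR.match(c))
    return arabic / len(chars) >= min_ratio
```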

Who are the annotators?

The primary "annotators" are the native speakers themselves, who provide dialectal data along with structured metadata. Quality scoring is automated. No external annotators are used for labeling.

Personal and Sensitive Information

  • All speaker identifiers are anonymized — original user IDs are replaced with non-reversible hashed identifiers
  • No personally identifiable information (names, locations, phone numbers) is included
  • Taboo and sensitive content (where present) is clearly labeled by category
  • Speakers provided informed consent during registration for their anonymized data to be used for research

Considerations for Using the Data

Social Impact

This dataset contributes to Arabic NLP equity by providing training data for the dialects actually spoken by 400+ million people. Most existing Arabic NLP resources focus exclusively on Modern Standard Arabic, which is no one's native language. By bridging this gap, ArSyra helps ensure that Arabic-speaking populations benefit equally from advances in language technology.

Discussion of Biases

Known biases to consider:

  1. Platform access bias — Contributors need internet access and a smartphone, potentially underrepresenting older, rural, or lower-income speakers
  2. Country representation — Some countries may be overrepresented depending on recruitment channels
  3. Urban bias — Online populations tend to be more urban, potentially underrepresenting rural dialect variants
  4. Literacy bias — Written responses may differ from purely spoken dialect, as speakers may unconsciously shift toward MSA
  5. Self-selection bias — Voluntary participants may not represent the full demographic spectrum

Other Known Limitations

  • Written approximations — Dialectal Arabic has limited standardized orthography; spelling varies across speakers
  • Prompt influence — Structured prompts may elicit more formal responses than spontaneous speech
  • Quality variation — Despite quality scoring, some responses may be lower quality
  • Temporal snapshot — Language evolves; slang and expressions may become dated over time

Additional Information

Use Cases

  • Training MSA ↔ dialect machine translation models
  • Building dialect identification and classification systems
  • Developing Arabic style transfer (formal ↔ informal)
  • Augmenting Arabic NMT systems with dialectal training pairs
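For the first use case, MSA↔dialect training pairs can be assembled by keeping only records that carry an MSA equivalent. Field names follow the documented schema; the helper itself is an illustrative sketch:

```python
def translation_pairs(records):
    """Yield (dialect_text, msa_text) pairs suitable as MT training
    examples, skipping records without an MSA equivalent."""
    for record in records:
        if record.get("text") and record.get("msa_text"):
            yield record["text"], record["msa_text"]
```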

Get the Full Dataset

This repository contains a preview sample of 50 records out of 8,912 total. Purchase the full dataset instantly at arsyra.com/datasets.html

Pricing

| Tier | What You Get |
|---|---|
| Preview (this repo) | 50 sample records — free to download and evaluate |
| Full Dataset | 8,912 records — instant download after purchase |
| Academic License | From $29 — for research and non-commercial use |
| Commercial License | From $99 — for products, SaaS, and enterprise use |

🛒 Buy Now →

What you get with the full dataset:

  • All 8,912 quality-filtered records
  • Per-category JSONL splits for easy loading
  • Instant download as ZIP after payment
  • Regular updates as our community grows
  • Priority support for integration questions

Questions? Email support@arsyra.com


Quick Start

```python
from datasets import load_dataset

# Load the preview sample
dataset = load_dataset("ArSyra/arsyra-translation")
print(f"Preview: {len(dataset['train'])} sample records")

# Browse examples
for example in dataset["train"].select(range(5)):
    print(f"{example['country']} ({example['dialect_group']}): {example['text'][:80]}...")

# For the full dataset (8,912 records), visit: https://arsyra.com/datasets.html
```

Licensing Information

The preview sample included in this repository is released under CC-BY-NC-SA-4.0.

The full dataset is available under flexible licensing terms:

| License | Use Case | Pricing |
|---|---|---|
| CC-BY-NC-SA-4.0 | Academic research, non-commercial use | From $29 |
| Commercial License | Enterprise, products, SaaS applications | From $99 |

Purchase a license → or email support@arsyra.com for custom licensing.

Citation Information

If you use this dataset in your research, please cite:

```bibtex
@dataset{arsyra_arsyra_translation_2026,
  title     = {ArSyra Translation — Arabic Dialect–MSA Parallel Corpus},
  author    = {{ArSyra Team}},
  year      = {2026},
  url       = {https://huggingface.co/datasets/ArSyra/arsyra-translation},
  publisher = {HuggingFace},
  license   = {CC-BY-NC-SA-4.0},
  note      = {Crowdsourced Arabic dialect dataset with 8,912 records from 16 countries}
}
```

Contributions

Thanks to the Arabic-speaking community who contributed their dialectal knowledge through the ArSyra platform. To contribute, visit arsyra.com.


Dataset card generated by the ArSyra Publish Pipeline. Last updated: 2026-02-21.
