---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - ar
license: other
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - text-classification
  - text-generation
task_ids:
  - sentiment-classification
  - multi-class-classification
  - hate-speech-detection
pretty_name: ArSyra Content Moderation — Arabic Safety Dataset
tags:
  - arabic
  - content-moderation
  - safety
  - toxicity-detection
  - taboo
  - sentiment-analysis
  - moderation
  - trust-and-safety
  - arabic-safety
  - hate-speech
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/all.jsonl
dataset_info:
  features:
    - name: text
      dtype: string
    - name: category
      dtype: string
    - name: country
      dtype: string
    - name: dialect_group
      dtype: string
    - name: quality_score
      dtype: int32
    - name: msa_text
      dtype: string
    - name: context
      dtype: string
    - name: speaker_hash
      dtype: string
  splits:
    - name: train
      num_examples: 10078
extra_gated_prompt: |-
  ## Access to ArSyra Arabic Dialect Datasets

  This dataset contains quality-scored Arabic dialect data collected from
  verified native speakers. **This is a preview sample** (50 records). The full
  dataset is available for purchase at
  [arsyra.com/datasets](https://arsyra.com/datasets.html).

  By requesting access you agree to:

  - Use the data only for research or evaluation purposes
  - Not redistribute the data
  - Contact support@arsyra.com for commercial licensing
extra_gated_fields:
  Full Name: text
  Organization: text
  Use Case:
    type: select
    options:
      - Research / Academic
      - Commercial / Business
      - Personal / Learning
      - Other
  I agree to the terms above:
    type: checkbox
extra_gated_button_content: Request Access
---

# 🛡️ ArSyra Content Moderation — Arabic Safety Dataset

Moving Arabic content moderation beyond MSA-only approaches.


## Dataset Description

### Dataset Summary

Training data for Arabic content moderation and online safety systems. Combines taboo content with cultural context labels, sentiment-annotated text for toxicity detection, quality control annotations for reliability scoring, and informal slang that often triggers false positives in rule-based filters.

Covers the nuanced spectrum of Arabic content that automated moderation systems frequently misclassify — including dialect-specific expressions, culturally contextual language, and code-switched content. Designed for teams building Arabic content moderation at scale.

| Statistic | Value |
|---|---|
| Total Records | 10,078 |
| Linguistic Categories | 4 |
| Countries Represented | 16 (Tunisia, Syria, Egypt, EU, Saudi Arabia, Morocco, Iraq, Sudan, Algeria, Jordan, Lebanon, UAE, Yemen, Libya, Kuwait, Palestine) |
| Dialect Groups | 7 (Maghrebi, Levantine, Egyptian, Gulf, Iraqi, Sudanese, Other) |
| Average Quality Score | 90.4/100 |
| License | Commercial |
| Last Updated | 2026-02-23 |

### How ArSyra Compares to Existing Arabic Datasets

| Dataset | Records | Dialects | Countries | Categories | Verified | MSA↔Dialect Pairs |
|---|---|---|---|---|---|---|
| **ArSyra (arsyra-content-mod)** | 10,078 | 7 | 16 | 4 | ✅ | ✅ |
| NADI (shared task) | ~20K | 4 | 21 | 1 | ❌ (Twitter) | ❌ |
| MADAR | ~12K | 6 | 25 | 1 | ✅ (paid) | ✅ |
| AOC (Arabic Online Commentary) | ~100K | 3 | | | ❌ (scraped) | ❌ |
| DART (Dialect Arabic) | ~25K | 5 | | 1 | ❌ (Twitter) | ❌ |
| ArSentD-LEV | ~4K | 1 | 4 | 1 | ❌ (Twitter) | ❌ |

ArSyra's advantages: Authentic native-speaker data (not scraped), multi-category structure, parallel MSA↔dialect text, quality scored, and continuously growing.

### Related ArSyra Datasets

Explore our other specialized Arabic dialect datasets:

Browse all datasets: [huggingface.co/ArSyra](https://huggingface.co/ArSyra) | [arsyra.com/datasets.html](https://arsyra.com/datasets.html)

### Supported Tasks

- **Text Classification** — Train classifiers for dialect identification, sentiment analysis, and content categorization.
- **Text Generation** — Fine-tune language models to generate authentic dialectal Arabic text.
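
For the text-classification task, a minimal bag-of-words baseline can be sketched in pure Python. This is only an illustrative starting point, not the recommended pipeline (a real system would fine-tune a pretrained Arabic model); the training records below are invented toy examples:

```python
import math
from collections import Counter, defaultdict

def train_nb(records):
    """Multinomial Naive Bayes with add-one smoothing over whitespace tokens."""
    label_counts = Counter(r["category"] for r in records)
    token_counts = defaultdict(Counter)
    for r in records:
        token_counts[r["category"]].update(r["text"].split())
    vocab = {t for counts in token_counts.values() for t in counts}
    return label_counts, token_counts, vocab

def predict_nb(model, text):
    label_counts, token_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)  # class prior
        denom = sum(token_counts[label].values()) + len(vocab)
        for tok in text.split():
            lp += math.log((token_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Toy labeled records, invented for illustration; real texts come from the dataset
train = [
    {"text": "كلام جارح ومهين", "category": "taboo"},
    {"text": "فيلم رائع وممتع", "category": "sentiment"},
]
model = train_nb(train)
print(predict_nb(model, "فيلم رائع"))  # → sentiment
```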

### Languages

Primary Language: Arabic (ar)

This dataset contains text in Modern Standard Arabic (MSA) and the following regional dialect groups: Maghrebi, Levantine, Egyptian, Gulf, Iraqi, Sudanese, Other. Country-level dialect codes: ar-TN, ar-SY, ar-EG, ar-EU, ar-SA, ar-MA, ar-IQ, ar-SD, ar-DZ, ar-JO, ar-LB, ar-AE, ar-YE, ar-LY, ar-KW, ar-PS.


## Dataset Structure

### Data Instances

Each record represents a single response from a verified native Arabic speaker to a structured linguistic prompt:

```json
{
  "question_code": "TB-0035",
  "category": "taboo",
  "subcategory": "sexual",
  "question_text": "كيف تقول \"خيانة زوجية\" بلهجتك؟",
  "answer_text": "خيانه زوجية",
  "response_time_ms": 31788,
  "quality_score": 100,
  "country": "TN",
  "answered_at": "2026-02-17T21:02:40.042Z",
  "quality_grade": "A",
  "speaker_hash": "anon-d2ViLTE3"
}
```
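
Records of this shape can be parsed from the configured `data/all.jsonl` with the standard `json` module. A small loader that works on any iterable of lines (the sample line below reuses values from the example record):

```python
import json

def load_jsonl(lines):
    """Parse JSON Lines input, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

# One sample line in the shape shown above
sample = ('{"question_code": "TB-0035", "category": "taboo", '
          '"quality_score": 100, "country": "TN", "quality_grade": "A"}')
records = load_jsonl([sample, "", sample])
a_grade = [r for r in records if r["quality_grade"] == "A"]
```

In practice you would pass `open("data/all.jsonl", encoding="utf-8")` directly to `load_jsonl`.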

### Data Fields

| Field | Type | Description |
|---|---|---|
| `text` | string | The Arabic text content — may be in dialect, MSA, or a mix |
| `category` | string | Linguistic category (e.g., dialect, proverbs, sentiment, conversation_pairs) |
| `country` | string | ISO 3166-1 alpha-2 country code of the speaker (e.g., EG, SA, MA) |
| `dialect_group` | string | Broad dialect group: egyptian, levantine, gulf, maghrebi, iraqi, sudanese, or other |
| `quality_score` | int32 | Automatically assigned quality rating from 0 to 100 |
| `msa_text` | string | Modern Standard Arabic equivalent (where available) |
| `context` | string | Additional context about the prompt or response |
| `speaker_hash` | string | Anonymized speaker identifier |

### Data Splits

| Split | Examples |
|---|---|
| train | 10,078 |

Note: A single train split is provided. We recommend creating your own train/validation/test splits based on your use case. For dialect-fair evaluation, stratify by country or dialect_group.
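
Such a stratified split can be produced with `datasets.Dataset.train_test_split(stratify_by_column=...)` on a `ClassLabel` column, or with a small standard-library sketch like the one below. The records and the `test_frac` value are illustrative choices, not recommendations:

```python
import random
from collections import defaultdict

def stratified_split(records, key, test_frac=0.1, seed=0):
    """Group records by `key`, then carve a test share from each group
    so every dialect_group (or country) appears in both splits."""
    rng = random.Random(seed)
    by_key = defaultdict(list)
    for r in records:
        by_key[r[key]].append(r)
    train_set, test_set = [], []
    for group in by_key.values():
        rng.shuffle(group)
        cut = max(1, int(len(group) * test_frac))
        test_set.extend(group[:cut])
        train_set.extend(group[cut:])
    return train_set, test_set

# Toy records (invented); real ones carry the fields listed above
recs = [{"text": f"t{i}", "dialect_group": g}
        for i in range(20) for g in ("egyptian", "gulf")]
tr, te = stratified_split(recs, "dialect_group", test_frac=0.2)
```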

### Category Breakdown

| Category | Records | % of Total |
|---|---|---|
| sentiment | 5,128 | 50.9% |
| slang | 2,264 | 22.5% |
| taboo | 1,719 | 17.1% |
| code_switching | 967 | 9.6% |
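
The breakdown above can be recomputed from the raw records with `collections.Counter`; the rows below are toy stand-ins (in practice, iterate over `dataset["train"]`):

```python
from collections import Counter

# Toy rows; in practice iterate over dataset["train"]
rows = [{"category": c} for c in
        ["sentiment", "sentiment", "sentiment", "slang", "slang", "taboo"]]
counts = Counter(r["category"] for r in rows)
total = sum(counts.values())
for cat, n in counts.most_common():
    print(f"{cat}: {n} ({100 * n / total:.1f}%)")
```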

## Dataset Creation

### Curation Rationale

Arabic content moderation is plagued by high false-positive rates because most systems are trained on MSA and misinterpret dialectal expressions. Harmless Gulf slang gets flagged as offensive, while genuinely toxic Maghrebi content slips through. This dataset provides the dialectally-diverse training data needed to build accurate Arabic moderation at scale.

### Source Data

#### Initial Data Collection and Normalization

Data was collected through the ArSyra platform (arsyra.com), a multi-dialect Arabic data collection system where verified native Arabic speakers respond to structured linguistic prompts about their dialect. The platform:

  1. Verifies speakers through phone number verification (region-specific) and language verification questions
  2. Presents structured prompts across multiple linguistic categories: dialect translations, conversation pairs, proverbs, slang, code-switching, sentiment expressions, instruction following, formality registers, and more
  3. Quality-scores all data through multi-layer validation to ensure linguistic accuracy and dialect authenticity
  4. Automatically enriches responses with metadata: country, dialect group, category, and quality indicators

#### Who are the source language producers?

Native Arabic speakers from 16 countries across the Arab world (Tunisia, Syria, Egypt, EU, Saudi Arabia, Morocco, Iraq, Sudan, Algeria, Jordan, Lebanon, UAE, Yemen, Libya, Kuwait, Palestine), participating voluntarily through the ArSyra platform. Speakers represent diverse demographics including age groups, education levels, and urban/rural backgrounds.

### Annotations

#### Annotation Process

Each response receives:

- **Automatic quality scoring** based on response length, character set validation, and consistency checks
- **Category labeling** derived from the prompt type
- **Dialect group classification** based on the speaker's registered country
- **Cross-speaker validation** where multiple speakers from the same region answer the same prompts
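
ArSyra's actual scoring formula is not published; the sketch below only illustrates two of the checks listed (length and character-set validation) with invented thresholds and weights:

```python
def quality_score(text: str, min_chars: int = 3) -> int:
    """Illustrative heuristic only: thresholds and penalties are invented,
    not ArSyra's actual formula. Checks length and Arabic character share."""
    score = 100
    stripped = text.strip()
    if len(stripped) < min_chars:        # too short to be a usable response
        score -= 50
    arabic = sum(1 for ch in stripped if "\u0600" <= ch <= "\u06FF")
    letters = sum(1 for ch in stripped if ch.isalpha())
    if letters and arabic / letters < 0.5:  # mostly non-Arabic script
        score -= 30
    return max(score, 0)

print(quality_score("خيانه زوجية"))  # → 100
```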

#### Who are the annotators?

The primary "annotators" are the native speakers themselves, who provide dialectal data along with structured metadata. Quality scoring is automated. No external annotators are used for labeling.

### Personal and Sensitive Information

- All speaker identifiers are anonymized — original user IDs are replaced with non-reversible hashed identifiers
- No personally identifiable information (names, locations, phone numbers) is included
- Taboo and sensitive content (where present) is clearly labeled by category
- Speakers provided informed consent during registration for their anonymized data to be used for research
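
Non-reversible hashing of this kind can be sketched as a salted SHA-256 digest. The platform's actual algorithm, salt, and encoding are not published; the `anon-` prefix here simply mimics the sample record's `speaker_hash`:

```python
import hashlib

def anonymize(user_id: str, salt: str) -> str:
    """Illustrative one-way speaker identifier: salted SHA-256, truncated.
    Not ArSyra's actual scheme."""
    digest = hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()
    return "anon-" + digest[:12]

h = anonymize("user-1", "secret-salt")
```

Because the hash is one-way and salted, the original user ID cannot be recovered from the published identifier.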

## Considerations for Using the Data

### Social Impact

This dataset contributes to Arabic NLP equity by providing training data for the dialects actually spoken by 400+ million people. Most existing Arabic NLP resources focus exclusively on Modern Standard Arabic, which is no one's native language. By bridging this gap, ArSyra helps ensure that Arabic-speaking populations benefit equally from advances in language technology.

### Discussion of Biases

Known biases to consider:

1. **Platform access bias** — Contributors need internet access and a smartphone, potentially underrepresenting older, rural, or lower-income speakers
2. **Country representation** — Some countries may be overrepresented depending on recruitment channels
3. **Urban bias** — Online populations tend to be more urban, potentially underrepresenting rural dialect variants
4. **Literacy bias** — Written responses may differ from purely spoken dialect, as speakers may unconsciously shift toward MSA
5. **Self-selection bias** — Voluntary participants may not represent the full demographic spectrum

### Other Known Limitations

- **Written approximations** — Dialectal Arabic has limited standardized orthography; spelling varies across speakers
- **Prompt influence** — Structured prompts may elicit more formal responses than spontaneous speech
- **Quality variation** — Despite quality scoring, some responses may be lower quality
- **Temporal snapshot** — Language evolves; slang and expressions may become dated over time

## Additional Information

### Use Cases

- Training Arabic content moderation classifiers
- Building toxicity detection for Arabic social media
- Reducing false positives in dialect-heavy content
- Trust & safety systems for Arabic-speaking markets

### Get the Full Dataset

This repository contains a preview sample of 50 records out of 10,078 total. Purchase the full dataset instantly at arsyra.com/datasets.html

#### Pricing

| Tier | Details |
|---|---|
| Preview (this repo) | 50 sample records — free to download and evaluate |
| Full Dataset | 10,078 records — instant download after purchase |
| Academic License | From $29 — for research and non-commercial use |
| Commercial License | From $99 — for products, SaaS, and enterprise use |

[🛒 Buy Now →](https://arsyra.com/datasets.html)

What you get with the full dataset:

- All 10,078 quality-filtered records
- Per-category JSONL splits for easy loading
- Instant download as ZIP after payment
- Regular updates as our community grows
- Priority support for integration questions

Questions? Email support@arsyra.com


## Quick Start

```python
from datasets import load_dataset

# Load the preview sample
dataset = load_dataset("ArSyra/arsyra-content-mod")
print(f"Preview: {len(dataset['train'])} sample records")

# Browse examples
for example in dataset["train"].select(range(5)):
    print(f"{example['country']} ({example['dialect_group']}): {example['text'][:80]}...")

# For the full dataset (10,078 records), visit: https://arsyra.com/datasets.html
```
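
Building on the preview, one way to carve out a safety-focused training subset. The quality threshold and the category set are example assumptions, not recommendations; the rows below are toy stand-ins for `dataset["train"]`:

```python
# Illustrative selection of safety-relevant, high-quality records.
SAFETY_CATEGORIES = {"taboo", "slang", "code_switching"}

def select_for_moderation(records, min_quality=80):
    """Keep records in safety-relevant categories above a quality cutoff."""
    return [
        r for r in records
        if r["category"] in SAFETY_CATEGORIES and r["quality_score"] >= min_quality
    ]

# Toy rows; in practice pass dataset["train"]
rows = [
    {"category": "taboo", "quality_score": 100},
    {"category": "sentiment", "quality_score": 95},
    {"category": "slang", "quality_score": 60},
]
picked = select_for_moderation(rows)  # keeps only the first row
```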

### Licensing Information

This dataset is available under a Commercial license. [Purchase access](https://arsyra.com/datasets.html) or email support@arsyra.com for custom licensing.

### Citation Information

If you use this dataset in your research, please cite:

```bibtex
@dataset{arsyra_arsyra_content_mod_2026,
  title     = {ArSyra Content Moderation — Arabic Safety Dataset},
  author    = {{ArSyra Team}},
  year      = {2026},
  url       = {https://huggingface.co/datasets/ArSyra/arsyra-content-mod},
  publisher = {HuggingFace},
  license   = {Commercial},
  note      = {Multi-dialect Arabic dataset with 10,078 records from 16 countries}
}
```

### Contributions

Thanks to the Arabic-speaking community who contributed their dialectal knowledge through the ArSyra platform. To contribute, visit arsyra.com.


*Dataset card generated by the ArSyra Publish Pipeline. Last updated: 2026-02-23.*