---
language:
- ar
- arz
- acm
- apc
- ary
- arb
language_bcp47:
- ar-EG
- ar-IQ
- ar-LB
- ar-MA
- ar-SA
license: mit
tags:
- arabic
- dialects
- nlp
- speech-to-text
- transcription
- text-classification
- linguistics
- corpus
- egyptian
- gulf
- levantine
- maghrebi
- iraqi
- cl100k_base
task_categories:
- text-generation
- text-classification
pretty_name: Arabic Dialect Corpus
size_categories:
- 100K<n<1M
configs:
- config_name: classified
  data_files:
  - split: train
    path: classified/train-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
- config_name: classified
  features:
  - name: text
    dtype: string
  - name: utterance_type
    dtype: string
  - name: dialect
    dtype: string
  - name: tokens
    dtype: int64
  - name: topic
    dtype: string
  - name: topic_confidence
    dtype: float64
  splits:
  - name: train
    num_bytes: 3096333
    num_examples: 10000
  download_size: 1531863
  dataset_size: 3096333
- config_name: default
  features:
  - name: text
    dtype: string
  - name: utterance_type
    dtype: string
  - name: dialect
    dtype: string
  - name: tokens
    dtype: int64
  splits:
  - name: train
    num_bytes: 61592976
    num_examples: 211593
  download_size: 26183021
  dataset_size: 61592976
---

# Arabic Dialect Corpus

A comprehensive collection of Arabic dialectal text, standardized for Natural Language Processing (NLP) model training, evaluation, and linguistic analysis. This corpus has been meticulously processed to ensure high-quality tokenization and consistent metadata.

## Dataset Statistics

| Metric | Value |
| :--- | :--- |
| **Total Records** | 127,180 |
| **Total Tokens** | 5,802,324 |
| **Average Tokens per Record** | 45.62 |
| **Dialect Categories** | 5 |

## Changelog

### Version 1.0 (January 2026)

This release establishes the baseline for the corpus with strict quality controls:

- **Token Count**: Validated 5.8M+ tokens using the `cl100k_base` encoding (the tokenizer used by GPT-4).
- **Data Density**: Optimized average record length to ~45 tokens for efficient training.
- **Dialect Coverage**: Confirmed distribution across 5 distinct dialect categories.
- **Quality Assurance**: Zero empty records and standardized metadata schema.
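The "zero empty records, standardized schema" guarantee can be checked on the consumer side with a small validator. The helper below is a hypothetical sketch (not part of the release pipeline) that enforces the card's base schema:

```python
# Required fields in the default configuration, per the dataset card.
REQUIRED = ("text", "utterance_type", "dialect", "tokens")

def validate(record: dict) -> bool:
    """Check a record against the card's schema: all required fields
    present, non-empty text, and a non-negative token count."""
    return (
        all(key in record for key in REQUIRED)
        and bool(record["text"].strip())
        and record["tokens"] >= 0
    )
```

Running this over a downloaded split is a quick way to confirm the quality guarantees hold for your local copy.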

## Dataset Structure

Each record in the `default` configuration contains the following fields:

- `text` (string): The raw Arabic text content.
- `utterance_type` (string): Classification of the utterance (e.g., statement, question).
- `dialect` (string): The regional dialect name: `Masri` (Egyptian), `Khaleeji` (Gulf), `Levantine`, `Maghrebi` (North African), or `Iraqi`.
- `tokens` (int64): The token count calculated using the `cl100k_base` encoding.

The `classified` configuration adds two fields:

- `topic` (string): The semantic category or topic of the text.
- `topic_confidence` (float64): Confidence score for the assigned topic label.
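Concretely, a record from the `classified` configuration has the following shape. All values here are illustrative, not drawn from the corpus:

```python
# Illustrative record shape for the "classified" configuration.
# Field names match the dataset card; the values are made up.
record = {
    "text": "ازيك عامل ايه النهارده؟",  # Egyptian Arabic greeting (illustrative)
    "utterance_type": "question",
    "dialect": "Masri",
    "tokens": 14,                        # hypothetical cl100k_base count
    "topic": "greetings",
    "topic_confidence": 0.92,
}

# The "default" configuration carries only these four fields.
default_fields = {"text", "utterance_type", "dialect", "tokens"}
assert default_fields <= set(record)
```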

## Usage

### Loading the Dataset

The dataset is hosted on the Hugging Face Hub and can be loaded directly using the `datasets` library.

```python
from datasets import load_dataset

# Load the default configuration (text, utterance_type, dialect, tokens)
dataset = load_dataset("dataflare/arabic-dialect-corpus")

# Load the "classified" configuration, which adds topic labels
classified = load_dataset("dataflare/arabic-dialect-corpus", "classified")
```

## Detailed Methodology

### Collection and Processing
The data was aggregated from diverse sources including transcribed media and public archives. The processing pipeline involved:
1.  **Normalization**: Text normalization to remove noise while preserving dialectal features.
2.  **Segmentation**: Splitting long passages into training-ready chunks.
3.  **Token Counting**: Rigorous token counting using `tiktoken` to assist in curriculum training and length bucketing.
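Steps 2 and 3 can be sketched as follows. This is a simplified stand-in, not the release pipeline: it splits on whitespace where the actual pipeline counts `cl100k_base` tokens, and the function names and bucket edges are hypothetical:

```python
from typing import Dict, List

def segment(text: str, max_tokens: int = 45) -> List[str]:
    """Split a long passage into chunks of at most max_tokens
    whitespace-delimited tokens (a stand-in for cl100k_base units)."""
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

def length_bucket(records: List[dict],
                  edges: tuple = (16, 32, 64)) -> Dict[str, List[dict]]:
    """Group records by their precomputed `tokens` field, as one might
    for curriculum training or length-bucketed batching."""
    buckets: Dict[str, List[dict]] = {}
    for rec in records:
        label = next((f"<= {e}" for e in edges if rec["tokens"] <= e),
                     f"> {edges[-1]}")
        buckets.setdefault(label, []).append(rec)
    return buckets
```

With precomputed token counts stored in the `tokens` field, bucketing requires no re-tokenization at training time, which is the practical payoff of step 3.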

## Citation and License

This dataset is released under the **MIT License**.

If you rely on this corpus for your research or application, please cite it using the following BibTeX entry:

```bibtex
@dataset{arabic_dialect_corpus,
  title={Arabic Dialect Corpus},
  author={Dataflare},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/dataflare/arabic-dialect-corpus}
}
```