# Amphion Arabic Dialect-Annotated Utterances
83,266 Whisper-transcribed Arabic utterances exported from the Amphion
corpus, each labeled with a country-level dialect code. Annotations come from
manual labeling of speakers and channels in the Amphion editor; this is a
small bootstrapped seed for the dialect-identification pipeline (see
newtts/docs/training/dialect-identification-pipeline.md).
## Schema

Each line in `data/{dialect}.jsonl.gz` matches the Arabic Reddit dialect
corpus schema so the two can be concatenated for DID training:
```json
{
  "text": "...",
  "source": "internal:saudi:speaker:789",
  "dialect": "saudi",
  "subreddit": null,
  "kind": "utterance",
  "score": null,
  "char_len": 312
}
```
- `text` — Whisper transcription, normalized (NFC, zero-width and tatweel stripped, shadda-first canonicalization, Arabic-Indic digits → Latin).
- `source` — `internal:{dialect}:speaker:{speaker_id}` when the dialect came from a manual speaker annotation, `internal:{dialect}:channel:{channel_id}` when it was inherited from the channel. Speaker dialect takes precedence.
- `subreddit`, `score` — always `null`; kept for column compatibility with the Reddit shards.
- `kind` — always `"utterance"` for this corpus.
- `char_len` — length of `text` in characters.
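The normalization of `text` can be sketched roughly as follows. This is a hypothetical re-implementation for illustration, not the pipeline's actual code: the exact character sets, the ordering of steps, and the shadda-first rule (simplified here to swapping a single adjacent haraka) may differ in the real tooling.

```python
import unicodedata

# Hypothetical normalization sketch; real character sets/ordering may differ.
ZERO_WIDTH = "\u200b\u200c\u200d\ufeff"   # common zero-width characters
TATWEEL = "\u0640"                        # kashida
SHADDA = "\u0651"
HARAKAT = "\u064b\u064c\u064d\u064e\u064f\u0650\u0652"
ARABIC_INDIC = {ord(c): str(i) for i, c in enumerate("٠١٢٣٤٥٦٧٨٩")}

def normalize(text: str) -> str:
    text = unicodedata.normalize("NFC", text)
    # Strip zero-width characters and tatweel.
    for ch in ZERO_WIDTH + TATWEEL:
        text = text.replace(ch, "")
    # Shadda-first canonicalization, simplified: if a haraka immediately
    # precedes a shadda, move the shadda in front of it.
    out = []
    for ch in text:
        if ch == SHADDA and out and out[-1] in HARAKAT:
            out.insert(len(out) - 1, ch)
        else:
            out.append(ch)
    # Arabic-Indic digits → Latin.
    return "".join(out).translate(ARABIC_INDIC)
```

Applying the same normalization to any text compared against this corpus (e.g. Reddit shards at train time) avoids spurious mismatches from digits or diacritic ordering.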
## Per-dialect counts
| dialect | utterances | hours |
|---|---|---|
| ye | 83,266 | 455.8 |
## Source split
- speaker (manual): 4,002
- channel (inherited): 79,264
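The speaker/channel split above can be recomputed from the `source` field of a shard. A minimal sketch, assuming the schema shown earlier; the shard path is illustrative:

```python
import gzip
import json
from collections import Counter

def source_kind(source: str) -> str:
    # source is "internal:{dialect}:speaker:{id}" or
    # "internal:{dialect}:channel:{id}"; the third segment is the kind.
    return source.split(":")[2]

def tally_sources(path: str) -> Counter:
    """Count speaker- vs channel-derived dialect annotations in one shard."""
    counts = Counter()
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            counts[source_kind(rec["source"])] += 1
    return counts

# e.g. tally_sources("data/ye.jsonl.gz")
# should reproduce the split above: 4,002 speaker / 79,264 channel
```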
## Filters applied

- words ≥ 4
- confidence ≥ 0.5
- dnsmos ≥ 0.0
- only_completed: False
- excluded labels: mixed, unsure
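As a predicate, the filters amount to the following sketch. The field names (`words`, `confidence`, `dnsmos`, `label`) are assumed names for the upstream Amphion metadata, not columns of the released JSONL shards:

```python
# Hypothetical filter pass over upstream utterance metadata; field names
# are illustrative, not columns of the released JSONL.
EXCLUDED_LABELS = {"mixed", "unsure"}

def keep(utt: dict) -> bool:
    return (
        utt["words"] >= 4
        and utt["confidence"] >= 0.5
        and utt["dnsmos"] >= 0.0
        # only_completed is False, so incomplete utterances are NOT dropped
        and utt["label"] not in EXCLUDED_LABELS
    )
```

Note that `dnsmos ≥ 0.0` is a no-op threshold, so audio quality is effectively unfiltered in this export.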
## Notes
- This is a domain-matched DID seed (Whisper-transcribed Arabic); pairs with the larger out-of-domain Reddit/QADI sets used for tokenizer/MLM pretraining.
- Country-level macro-F1 is hard for short utterances (intra-Gulf dialects are near-indistinguishable). Aggregate predictions to region level for production gates.
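Region-level gating can be sketched as mapping country codes to regions before scoring. The mapping below is an illustrative assumption (the real grouping is a project decision), and the macro-F1 is computed by hand to keep the example dependency-free:

```python
# Illustrative country → region mapping; the actual grouping is a
# project decision, not defined by this dataset.
REGION = {
    "ye": "gulf", "saudi": "gulf", "kuwait": "gulf",
    "egypt": "nile", "sudan": "nile",
    "morocco": "maghreb", "algeria": "maghreb", "tunisia": "maghreb",
}

def to_region(dialect: str) -> str:
    return REGION.get(dialect, "other")

def region_macro_f1(y_true, y_pred) -> float:
    """Macro-F1 after collapsing country labels to regions."""
    t = [to_region(d) for d in y_true]
    p = [to_region(d) for d in y_pred]
    f1s = []
    for lab in sorted(set(t) | set(p)):
        tp = sum(1 for a, b in zip(t, p) if a == lab and b == lab)
        fp = sum(1 for a, b in zip(t, p) if a != lab and b == lab)
        fn = sum(1 for a, b in zip(t, p) if a == lab and b != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Under this scheme, confusing `ye` with `saudi` is not penalized (both map to the same region), while cross-region confusions still are.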