Multi-Accent English Speech Corpus (Augmented & Speaker-Disjoint)
This dataset is a curated and augmented multi-accent English speech corpus designed for speech recognition, accent classification, and representation learning.
It consolidates multiple open-source accent corpora, converts all audio to a unified format, applies targeted data augmentation, and exports in a tidy, Hugging Face–ready structure.
✨ Key Features
- Accents covered (12 total):
  american_english, british_english, indian_english, canadian_english, australian_english, scottish_english, irish_english, new_zealand_english, northernirish, african_english, welsh_english, south_african_english
- Speaker-disjoint splits: each speaker is assigned to exactly one split (train/validation/test).
- Augmentation strategy:
  - < 2.6k samples → expanded to 5k via augmentation
  - 2.6k–10k samples → expanded to 10k via augmentation
  - over 10k samples → 50% replaced in place with augmented versions
  - Methods: time-stretch, pitch-shift, background-noise injection, reverb/EQ
- Audio format (standardized): `.wav`, 16-bit PCM, 16 kHz sample rate, mono
- Metadata-rich:
  `uid`, `path`, `text`, `accent`, `speaker_id`, `dataset`, `is_augmented`, `source_uid`, `aug_label`, `duration_s`
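
The tiered augmentation targets above can be sketched as a small helper. This is an illustrative reconstruction of the policy, not the dataset's actual build code; the function name and return shape are mine:

```python
def augmentation_plan(n_samples: int) -> dict:
    """Return the augmentation plan for an accent with n_samples originals.

    Mirrors the tiers listed above:
      - under 2,600 originals -> augment up to 5,000 total
      - 2,600 to 10,000       -> augment up to 10,000 total
      - over 10,000           -> replace 50% in place with augmented versions
    """
    if n_samples < 2_600:
        return {"mode": "expand", "n_augmented": 5_000 - n_samples}
    if n_samples <= 10_000:
        return {"mode": "expand", "n_augmented": 10_000 - n_samples}
    return {"mode": "replace_in_place", "n_augmented": n_samples // 2}
```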
📂 Dataset Structure
```
hf_export/
├── data/
│   ├── train/
│   │   ├── 0000/uid.wav
│   │   └── ...
│   ├── validation/
│   └── test/
├── metadata/
│   ├── train.parquet
│   ├── validation.parquet
│   └── test.parquet
├── SPLIT_REPORT.md
├── QA_REPORT.md
├── splits_by_speaker.csv
└── README.md
```
🗂 Metadata Schema
| Column | Type | Description |
|---|---|---|
| `uid` | string | Unique ID per sample |
| `path` | string | Relative path to the `.wav` file |
| `text` | string | Transcript |
| `accent` | string | One of the 12 whitelisted accents |
| `speaker_id` | string | Unique speaker ID (dataset-prefixed) |
| `dataset` | string | Source dataset (`cv`, `vctk`, `accentdb`, etc.) |
| `is_augmented` | bool | `True` if augmented |
| `source_uid` | string | UID of the original sample (for augmented rows) |
| `aug_label` | string | Applied augmentation method (e.g., `pitch:+1.2`) |
| `duration_s` | float32 | Duration in seconds |
Note: for accents with over 10k samples, augmented clips replace originals in place, so `is_augmented` stays `False` even though roughly half of those rows are augmented. If you need an original/augmented distinction for these accents, draw a random subsample rather than relying on the flag.
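
One way to handle those >10k accents is to sample half of their rows as a stand-in for the originals. The helper below is a suggestion, not part of the dataset tooling; the accent name in the commented usage is illustrative:

```python
import pandas as pd

def sample_presumed_originals(meta: pd.DataFrame, accent: str, seed: int = 0) -> pd.DataFrame:
    """For a >10k accent whose is_augmented flag is always False,
    draw a random half of its rows as a proxy for the originals."""
    rows = meta[meta["accent"] == accent]
    return rows.sample(frac=0.5, random_state=seed)

# Typical use against the exported metadata (paths from the layout above):
# meta = pd.read_parquet("hf_export/metadata/train.parquet")
# originals = sample_presumed_originals(meta, "american_english")
```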
📊 Splitting Strategy
- Ratios: 78% train, 11% validation, 11% test
- Speaker-disjoint: a speaker appears in only one split
- Accent-aware: splits preserve global accent proportions
- Dataset balance: allocator prefers splits underrepresented for a dataset
- Augmentation inheritance: augmented samples inherit the split of their source speaker
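
A minimal sketch of how speaker-disjointness can be guaranteed is to hash each `speaker_id` into a split bucket, so every sample from the same speaker lands in the same split. This is only an illustration of the property; the actual allocator is also accent- and dataset-aware, which this sketch omits:

```python
import hashlib

def assign_split(speaker_id: str, ratios=(0.78, 0.11, 0.11)) -> str:
    """Deterministically map a speaker to train/validation/test.

    Hashing the speaker ID (not the sample ID) means all of a
    speaker's clips share one split, so splits are speaker-disjoint.
    """
    h = int(hashlib.md5(speaker_id.encode()).hexdigest(), 16)
    x = (h % 10_000) / 10_000  # uniform-ish value in [0, 1)
    if x < ratios[0]:
        return "train"
    if x < ratios[0] + ratios[1]:
        return "validation"
    return "test"
```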
✅ Preflight Validations
Checks applied before release:
- Unique `uid` values
- All files exist and are readable
- Format: 16 kHz / mono / PCM_16
- `accent` ∈ whitelist (12)
- Transcripts non-empty
- Augmented samples link back to valid originals
- Duration bounds: 0.2s–30s (flagged outliers)
- Speaker-disjointness across splits
- Accent & dataset distributions close to global ratios
- Duplicate detection (duration+text fingerprint)
Reports:
- SPLIT_REPORT.md: speaker allocation, accent/dataset balance
- QA_REPORT.md: split sizes, duration anomalies, duplicate candidates
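
A few of the metadata-level checks above can be sketched against a loaded parquet frame. This is a simplified illustration (function name and return format are mine); the file- and format-level checks need the audio on disk and are omitted:

```python
import pandas as pd

def preflight(meta: pd.DataFrame) -> list[str]:
    """Run some of the release checks on a metadata frame:
    uid uniqueness, non-empty transcripts, duration bounds,
    and augmented-row linkage back to valid originals."""
    problems = []
    if meta["uid"].duplicated().any():
        problems.append("duplicate uid values")
    if (meta["text"].str.strip() == "").any():
        problems.append("empty transcripts")
    out_of_bounds = ~meta["duration_s"].between(0.2, 30.0)
    if out_of_bounds.any():
        problems.append(f"{out_of_bounds.sum()} duration outliers")
    aug = meta[meta["is_augmented"]]
    if not aug["source_uid"].isin(meta["uid"]).all():
        problems.append("augmented rows with missing originals")
    return problems
```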
🔎 Example Usage
```python
from datasets import load_dataset

ds = load_dataset("cagatayn/multi_accent_speech", split="train")
print(ds[0])
# {
#   'uid': 'abc123',
#   'path': 'data/train/0000/abc123.wav',
#   'text': "it's the philosophy that guarantees everyone's freedom",
#   'accent': 'american_english',
#   'speaker_id': 'cv_sample-000031',
#   'dataset': 'cv',
#   'is_augmented': False,
#   'source_uid': '',
#   'aug_label': '',
#   'duration_s': 3.21,
#   'audio': {'path': 'data/train/0000/abc123.wav', 'array': ..., 'sampling_rate': 16000}
# }
```
📜 License
- Audio/data sourced from Common Voice, VCTK, L2 Arctic, Speech Accent Archive, AccentDB.
- Augmented versions generated as derivative works.
- Redistribution follows the most restrictive upstream license. Please review original dataset terms before commercial use.