---
language:
- ka
license: cc0-1.0
task_categories:
- text-to-speech
- audio-to-audio
tags:
- georgian
- tts
- common-voice
- speech-synthesis
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: eval
    path: data/eval-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 24000
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 6137099626.8
    num_examples: 20300
  - name: eval
    num_bytes: 282215307.562
    num_examples: 1001
  - name: test
    num_bytes: 31410778
    num_examples: 120
  download_size: 5452817424
  dataset_size: 6450725712.362
---
# Common Voice Georgian — Cleaned for TTS/STT

A high-quality subset of Mozilla Common Voice Georgian, cleaned and filtered specifically for text-to-speech fine-tuning.
## Dataset Summary

| | |
|---|---|
| Total samples | 21,421 |
| Total duration | 35.0 hours |
| Speakers | 12 |
| Sample rate | 24 kHz mono WAV |
| Language | Georgian (kat) |
| Source | Mozilla Common Voice 19.0 |
| License | CC0 1.0 (public domain) |
## Splits

| Split | Samples | Description |
|---|---|---|
| train | 20,300 | Training data |
| eval | 1,001 | Validation data |
| test | 120 | Best-quality speaker references (top NISQA scores) |
## Quality Pipeline

The dataset was cleaned down from ~71K raw Common Voice recordings through a six-stage pipeline:

1. **Standardize** — resample to 24 kHz mono, normalize loudness to −23 LUFS, keep durations in [0.5 s, 30 s]
2. **Enhance** — VoiceFixer audio restoration + SoX spectral noise subtraction
3. **NISQA filter** — keep clips with NISQA MOS ≥ 3.0 (neural speech quality assessment)
4. **Duration outlier** — IQR-based seconds-per-character filter (removes misaligned, rushed, or slow speech)
5. **Transcript verify** — round-trip ASR (Meta Omnilingual 7B, 1.9% CER on Georgian) with a CER ≤ 0.20 threshold
6. **Speaker select** — keep speakers with ≥ 1,800 seconds of total audio
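Stages 4 and 5 can be sketched in plain Python (a simplified illustration of the idea, not the production pipeline; the CER here is a standard Levenshtein distance divided by reference length):

```python
def iqr_bounds(values, k=1.5):
    """Tukey fences [Q1 - k*IQR, Q3 + k*IQR] for outlier filtering."""
    s = sorted(values)

    def quantile(p):
        i = p * (len(s) - 1)
        lo, hi = int(i), min(int(i) + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (i - lo)

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr


def seconds_per_char(duration, text):
    """Speaking-rate proxy used for the duration-outlier filter."""
    return duration / max(len(text), 1)


def cer(ref, hyp):
    """Character error rate: Levenshtein distance / len(ref)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / max(len(ref), 1)
```

A clip survives both stages when its seconds-per-character rate lies inside the fences computed over the corpus and `cer(transcript, asr_output) <= 0.20`.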
## Fields

| Field | Type | Description |
|---|---|---|
| id | string | Common Voice clip ID |
| audio | Audio | 24 kHz mono WAV |
| text | string | Georgian transcript |
| speaker_id | string | Anonymized speaker ID (0–11) |
| duration | float64 | Duration in seconds |
## Speaker Distribution
| Speaker | Samples | Duration |
|---|---|---|
| 0 | 5,683 | 8.8h |
| 1 | 1,164 | 1.8h |
| 2 | 2,970 | 5.3h |
| 3 | 3,240 | 5.3h |
| 4 | 2,595 | 3.6h |
| 5 | 1,556 | 2.8h |
| 6 | 1,131 | 1.8h |
| 7 | 1,130 | 2.1h |
| 8 | 470 | 0.8h |
| 9 | 544 | 1.0h |
| 10 | 607 | 1.0h |
| 11 | 331 | 0.7h |
## Statistics

- Duration: min 2.4 s, mean 5.9 s, max 10.6 s
## Usage

```python
from datasets import load_dataset

ds = load_dataset("NMikka/Common-Voice-Geo-Cleaned")

# Training
for sample in ds["train"]:
    print(sample["text"], sample["duration"])

# Validation
for sample in ds["eval"]:
    print(sample["text"])

# Best speaker references (for TTS inference / voice cloning)
for sample in ds["test"]:
    print(sample["text"], sample["speaker_id"])
```
## Citation

If you use this dataset, please cite Mozilla Common Voice:

```bibtex
@inproceedings{ardila2020common,
  title={Common Voice: A Massively-Multilingual Speech Corpus},
  author={Ardila, Rosana and others},
  booktitle={LREC},
  year={2020}
}
```