
To access the Moonscape Human Speech Atlas, please provide your institutional details and agree to the biometric privacy terms. NOTE: Requests from non-institutional email addresses (@gmail, @hotmail, @yahoo, @outlook, etc.) are automatically rejected for the Restricted Tier.


Moonscape Human Speech Atlas (HSA)

Moonscape Software — Human Speech Atlas

A curated corpus of pre-extracted acoustic feature matrices for multilingual prosody research. The HSA covers 90+ languages across 12 linguistic family trees, derived from Mozilla Common Voice (CV24.0) and Spontaneous Speech (SPS2.0) — both CC0.

The HSA is a companion project to the Synthetic Speech Atlas (SSA), which provides parallel feature-space representations of synthetic and bonafide speech for deepfake detection. Together they form a complete acoustic telemetry platform.

Audio files are not included. Each table is a Snappy-compressed Parquet file of 38 hand-crafted classical signal-processing features extracted via Parselmouth/Praat and Brouhaha, with full biometric protection applied at export.

Export version: HSA_v1_2026
Anonymization standard: moonscape_k5_fp16_v1
Watermark: HMAC-SHA256 seeded FP16-resolution noise per column


Repository Structure

This is a single gated repository. All 12 linguistic family trees are available as named subsets (configurations) within it. Each subset maps to a subdirectory of Parquet files under data/.

Human_Speech_Atlas/
├── README.md
├── LICENSE.md
└── data/
    ├── Indo-European/        ← ~148K cream clips, 50 tables
    ├── Niger-Congo/          ← ~144K cream clips, 34 tables
    ├── Austronesian/         ← ~46K cream clips, 17 tables
    ├── Mesoamerican/         ← ~28K cream clips, 13 tables
    ├── Americas-Other/       ← ~16K cream clips, 4 tables
    ├── Afro-Asiatic/         ← ~14K cream clips, 5 tables
    ├── Nilo-Saharan/         ← ~12K cream clips, 4 tables
    ├── Trans-New-Guinea/     ← ~11K cream clips, 3 tables
    ├── Eurasian-Minor/       ← ~8K cream clips, 8 tables
    ├── Turkic/               ← ~8K cream clips, 6 tables
    ├── Asian-Minor/          ← ~8K cream clips, 10 tables
    └── Isolates/             ← ~6K cream clips, 2 tables

Loading a Subset

from datasets import load_dataset

# Load a single family tree (load_dataset returns a DatasetDict, not a DataFrame)
ds = load_dataset("moonscape-software/Human_Speech_Atlas", "Indo-European")

# Load multiple trees
indo = load_dataset("moonscape-software/Human_Speech_Atlas", "Indo-European")
niger = load_dataset("moonscape-software/Human_Speech_Atlas", "Niger-Congo")

# Load all trees (large — ~531K rows total)
all_trees = {
    tree: load_dataset("moonscape-software/Human_Speech_Atlas", tree)
    for tree in [
        "Indo-European", "Niger-Congo", "Austronesian", "Mesoamerican",
        "Americas-Other", "Afro-Asiatic", "Nilo-Saharan", "Trans-New-Guinea",
        "Eurasian-Minor", "Turkic", "Asian-Minor", "Isolates"
    ]
}

Family Tree Index

| Subset (config_name) | Tables (languages) | Cream Clips | Notable Languages |
|---|---|---|---|
| Indo-European | 50 | ~148K | Hindi, Cornish, Gujari, Dhatki, Khowar, Kalasha, Manx, Lasi, Gawri |
| Niger-Congo | 34 | ~144K | Hausa, Nawdm, Massa, Chokwe, Fang, Igbo, Cameroon Grassfields |
| Austronesian | 17 | ~46K | Seediq, Batak Toba, Gorontalo, Cuyonon, Melanau |
| Mesoamerican | 13 | ~28K | Tzeltal, 3x Mazatec, Mixtec, Totonac, Huichol |
| Americas-Other | 4 | ~16K | Central Alaskan Yupik, Seri, Quechua |
| Afro-Asiatic | 5 | ~14K | Hausa, Hebrew, Tashlhiyt Berber |
| Nilo-Saharan | 4 | ~12K | Dinka Ruweng, Kenyan Luo |
| Trans-New-Guinea | 3 | ~11K | Mauwake, Ukuriguma (Papuan deep-time) |
| Eurasian-Minor | 8 | ~8K | Adyghe, Kabardian, Lak, Moksha, Votic |
| Turkic | 6 | ~8K | Kazakh, Tuvan, Bashkir |
| Asian-Minor | 10 | ~8K | Min Dong, Puxian Min, Keazi, Bodo |
| Isolates | 2 | ~6K | Korean, Georgian |

Clip counts are post-k-anonymisation (k=5). Rows where quasi-identifier groups contain fewer than 5 members are suppressed.
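The suppression rule can be sketched in a few lines of pandas. The quasi-identifier column names follow the set described above, and `suppress_small_groups` is an illustrative helper under those assumptions, not the actual export code:

```python
import pandas as pd

def suppress_small_groups(df: pd.DataFrame, k: int = 5) -> pd.DataFrame:
    """Drop rows whose quasi-identifier group has fewer than k members."""
    qi = ["gender", "age_bucket", "duration_bucket"]
    group_sizes = df.groupby(qi)[qi[0]].transform("size")
    return df[group_sizes >= k].reset_index(drop=True)

# Toy frame: the group of 5 survives, the singleton row is suppressed.
toy = pd.DataFrame({
    "gender": ["female"] * 5 + ["male"],
    "age_bucket": ["20-29"] * 5 + ["30-39"],
    "duration_bucket": [3200] * 5 + [4100],
    "pitch_mean": [210.1, 198.4, 224.9, 205.2, 215.7, 131.3],
})
print(len(suppress_small_groups(toy)))  # 5
```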


Why This Dataset Exists

Cross-linguistic prosody research is underserved by existing open corpora. The HSA enables researchers to:

  • Study prosodic typology across genetically unrelated language families in a single unified schema and a single gated repository
  • Train cross-lingual speech models without audio infrastructure
  • Benchmark acoustic features across tonal, stress-timed, and syllable-timed languages
  • Use the Isolates subset (Korean, Georgian) as a control group for studying convergent vs inherited prosodic features
  • Access rare and endangered language data — Votic (~5 remaining speakers), Tashlhiyt Berber (consonant-only syllabics), Trans-New Guinea Papuan languages

Privacy & Legal Framework

Why This Corpus Is Gated

These files contain acoustic features derived from recordings of real human speakers. Although audio is not included, acoustic feature vectors carry residual biometric information. The following controls are applied:

  1. Access restricted to verifiable institutional actors
  2. Non-institutional emails rejected at intake
  3. Commercial use requires a separate EULA with Moonscape Software
  4. All users agree to non-re-identification and watermark integrity obligations

Biometric Protection Measures — moonscape_k5_fp16_v1

  1. UID link cut — file_id, client_id, source_file, and transcript stripped entirely. No path back to source speaker or sentence identifiers.

  2. K-anonymity (k≥5) — Rows suppressed where the quasi-identifier group {gender, age_bucket, duration_bucket} contains fewer than 5 members.

  3. Duration bucketing — duration_ms rounded to nearest 100ms.

  4. Precision reduction — All continuous acoustic variables rounded to 2dp.

  5. FP16 reinflation watermark — Each 2dp-rounded value is reinflated with deterministic FP16-resolution seeded noise:

    seed  = HMAC-SHA256(HSA_EXPORT_SECRET, col_name + "|HSA_v1_2026")
    noise ~ Uniform(-0.004, +0.004)  [seeded deterministically per column]
    value = float16(round(raw, 2) + noise)  [stored as float32]
    

    This destroys the backward-engineering path to raw biometric values while preserving statistical validity. Every row carries a unique verifiable provenance signature.

Source Data License

All source audio is Mozilla Common Voice CV24.0 (CC0-1.0) and Mozilla Spontaneous Speech SPS2.0 (CC0-1.0). No audio is redistributed. Feature extraction pipeline and methodology are copyright Moonscape Software.


Dataset Schema

All HSA parquet files share an identical 46-column canonical schema regardless of language, family tree, or source corpus. Column order is fixed.

Identity & Provenance

| Column | Type | Description |
|---|---|---|
| clip_id | string | Anonymous sequential ID. Format: {lang_corpus_NNNNNN} |
| lang_code | string | ISO 639 language code (e.g. ko, ha, btv) |
| lang_name | string | Human-readable language name |
| corpus | string | Source corpus: cv24 or sps2 |
| speech_type | string | scripted (CV24) or spontaneous (SPS2) |
| source_dataset | string | Full source name (e.g. Mozilla Common Voice CV24.0) |
| sentence_domain | string | Text domain: wikipedia \| news \| etc. |

Demographics

| Column | Type | Description |
|---|---|---|
| gender | string | male \| female \| other \| unknown |
| age | string | Age bracket (e.g. 20-29) or unknown |
| accent | string | Self-reported accent/dialect label |
| dialect_tag | string | Normalised dialect code |
| sample_type | string | cream_t1 \| cream_t2 \| fill_t3 \| fill_t4 |

Temporal

| Column | Type | Description |
|---|---|---|
| duration_ms | Int64 | Clip duration bucketed to nearest 100ms |
| duration_s | float32 | duration_ms / 1000 |

Quality Gate (Brouhaha)

| Column | Type | Description |
|---|---|---|
| tier | int | 1=PRISTINE, 2=STUDIO, 3=AMBIENT, 4=TRASH |
| tier_label | string | PRISTINE \| STUDIO \| AMBIENT \| TRASH |
| snr_median | float32 | Median signal-to-noise ratio (dB) |
| snr_mean | float32 | Mean SNR (dB) |
| c50_median | float32 | Median room clarity C50 (dB) |
| speech_ratio | float32 | Active speech fraction (0-1) |

Acoustic Features (float32, FP16 watermarked)

| Column | Units | Description |
|---|---|---|
| pitch_mean | Hz | Mean F0 (VAD-bounded, voiced frames only) |
| pitch_std | Hz | F0 standard deviation |
| pitch_range | Hz | 95th-5th percentile F0 |
| pitch_velocity_max | Hz/frame | Max F0 rate-of-change |
| jitter_local | % | Cycle-to-cycle period variation (MP3 fidelity caveat) |
| shimmer_local | % | Cycle-to-cycle amplitude variation (MP3 fidelity caveat) |
| hnr_mean | dB | Harmonics-to-noise ratio |
| cpps | | Cepstral peak prominence, smoothed |
| intensity_mean | dB | Mean intensity (normalised — see limitations) |
| intensity_max | dB | Peak intensity (normalised — see limitations) |
| intensity_range | dB | Dynamic range |
| intensity_velocity_max | dB/frame | Max intensity rate-of-change |
| spectral_centroid_mean | Hz | Mean spectral centroid |
| spectral_tilt | dB/kHz | Log-power spectrum slope |
| mfcc_delta_mean | | Mean first-order MFCC delta |
| zcr_mean | | Zero crossing rate |
| teo_mean | | Mean Teager Energy Operator |
| teo_std | | TEO standard deviation |
| f1_mean | Hz | Mean first formant |
| f2_mean | Hz | Mean second formant |
| f3_mean | Hz | Mean third formant |
| formant_dispersion | Hz | (F3-F1)/2 — vocal tract length proxy |
| npvi | | Normalised Pairwise Variability Index (0.0 pending MFA) |
| articulation_rate | syl/s | Syllable rate (0.0 pending MFA) |
| emotion_score | 0-1 | Composite vocal intensity score |
| syllable_count_approx | int | Vowel-count syllable proxy |
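Since formant_dispersion is stored alongside f1_mean and f3_mean, its definition can be sanity-checked directly on any loaded subset. The frame below uses invented values purely for illustration:

```python
import pandas as pd

# Toy stand-in for a loaded HSA table; values are invented.
toy = pd.DataFrame({
    "f1_mean": [500.0, 620.0],
    "f3_mean": [2500.0, 2900.0],
    "formant_dispersion": [1000.0, 1140.0],
})

# Recompute (F3 - F1) / 2 and compare against the stored column.
recomputed = (toy["f3_mean"] - toy["f1_mean"]) / 2
assert recomputed.equals(toy["formant_dispersion"])
```

On real HSA rows the comparison should be approximate rather than exact, since the stored column carries the 2dp rounding and FP16 watermark noise described above.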

Known Limitations

  • intensity_mean / intensity_max — Mozilla normalises source audio to -20 dBFS. These columns are dead vectors. Cross-speaker intensity comparison is invalid.
  • jitter_local / shimmer_local — MP3 codec degrades sub-ms glottal measurements. HNR and CPPS are more robust alternatives for this corpus.
  • npvi / articulation_rate — Return 0.0 pending Phase 2 MFA phoneme alignment.
  • Tonal languages — In Niger-Congo, Tai-Kadai, and Sino-Tibetan languages, pitch_mean/std/range measure lexical tone, not prosodic stress.

Quality Tiers

| Tier | Label | SNR | C50 | Speech ratio |
|---|---|---|---|---|
| 1 | PRISTINE | >= 35 dB | >= 35 dB | >= 0.30 |
| 2 | STUDIO | >= 25 dB | >= 20 dB | >= 0.30 |
| 3 | AMBIENT | >= 10 dB | any | >= 0.10 |
| 4 | TRASH | < 10 dB | any | < 0.10 |
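Assuming a strictest-first cascade with TRASH as the fallback (the table does not spell out precedence), the tier assignment can be sketched as:

```python
def quality_tier(snr_db: float, c50_db: float, speech_ratio: float) -> int:
    """Map Brouhaha quality measures to an HSA tier (1=PRISTINE .. 4=TRASH),
    checking the strictest tier first; illustrative, not the export code."""
    if snr_db >= 35 and c50_db >= 35 and speech_ratio >= 0.30:
        return 1  # PRISTINE
    if snr_db >= 25 and c50_db >= 20 and speech_ratio >= 0.30:
        return 2  # STUDIO
    if snr_db >= 10 and speech_ratio >= 0.10:
        return 3  # AMBIENT
    return 4      # TRASH

print(quality_tier(40, 36, 0.5))  # 1
print(quality_tier(12, 5, 0.2))   # 3
```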

Usage Examples

from datasets import load_dataset

# Load Indo-European family (default subset)
ds = load_dataset("moonscape-software/Human_Speech_Atlas", "Indo-European")
df = ds["train"].to_pandas()

# Cream clips only (T1+T2)
cream = df[df["sample_type"].str.startswith("cream", na=False)]

# Scripted (CV24) vs spontaneous (SPS2)
scripted    = df[df["corpus"] == "cv24"]
spontaneous = df[df["corpus"] == "sps2"]

# Cross-family pitch comparison (exclude tonal languages)
from datasets import load_dataset, concatenate_datasets
trees = ["Indo-European", "Austronesian", "Mesoamerican",
         "Eurasian-Minor", "Turkic", "Isolates"]
frames = [load_dataset("moonscape-software/Human_Speech_Atlas", t)["train"]
          for t in trees]
combined = concatenate_datasets(frames).to_pandas()
non_tonal = combined[~combined["lang_code"].isin(["th","cdo","cpx","brx"])]
print(non_tonal.groupby("lang_name")["pitch_mean"].mean().sort_values())

Extraction Pipeline

Pass 1 — Brouhaha (Lavechin et al., Interspeech 2022) — SNR, C50, VAD, tier.

Pass 2 — Classical Acoustic Features — Parselmouth/Praat + librosa, 38 features.

Pass 3 — MFA Phoneme Alignment (pending) — Montreal Forced Aligner on cream WAV sets. Will populate npvi and articulation_rate.
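As a flavour of what Pass 2 computes, here are minimal pure-NumPy versions of two of the simpler schema features, zero crossing rate (zcr_mean) and spectral centroid (spectral_centroid_mean). The helper names are illustrative; the production pipeline uses Parselmouth/Praat and librosa, so exact values will differ:

```python
import numpy as np

def zcr_mean(signal: np.ndarray) -> float:
    """Fraction of consecutive sample pairs whose sign differs."""
    signs = np.signbit(signal)
    return float(np.mean(signs[1:] != signs[:-1]))

def spectral_centroid(signal: np.ndarray, sr: int) -> float:
    """Magnitude-weighted mean frequency of the spectrum, in Hz."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)  # 1 s of a 440 Hz sine
print(round(spectral_centroid(tone, sr)))  # 440
# zcr_mean(tone) ~ 0.055: 880 crossings per second over 16000 samples
```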


Licensing

Tier 1 — Moonscape Academic License (Non-Commercial)

Free for academic and non-commercial research on submission of institutional email. Full terms in LICENSE.md. Key obligations:

  • Cite Mozilla Common Voice and this dataset in any publications
  • Never attempt to re-identify human speakers from acoustic features
  • Never alter or remove forensic watermarks
  • Not redistribute as a standalone commercial product

Tier 2 / Tier 3 — Commercial License

Requires execution of a separate EULA with Moonscape Software prior to access.

| Source | License |
|---|---|
| Mozilla Common Voice CV24.0 | CC0-1.0 |
| Mozilla Spontaneous Speech SPS2.0 | CC0-1.0 |
| Feature extraction pipeline | Copyright Moonscape Software |

Citation

@dataset{kleingertner2026hsa,
  author    = {Kleingertner, Chris},
  title     = {Moonscape Human Speech Atlas (HSA)},
  year      = {2026},
  publisher = {Moonscape Software},
  note      = {Multilingual acoustic prosody feature corpus across 90+ languages
               in 12 linguistic family trees. Companion to the Synthetic Speech Atlas.},
  url       = {https://huggingface.co/datasets/moonscape-software/Human_Speech_Atlas}
}

@misc{mozilla2024commonvoice,
  title  = {Mozilla Common Voice},
  author = {Mozilla Foundation},
  year   = {2024},
  url    = {https://commonvoice.mozilla.org},
  note   = {CV24.0 release, CC0-1.0}
}

@inproceedings{lavechin2022brouhaha,
  title     = {Brouhaha: Multi-Task Training for Voice Activity Detection, Speech-to-Noise Ratio, and {C50} Room Acoustics Estimation},
  author    = {Lavechin, Marvin and others},
  booktitle = {Interspeech},
  year      = {2022}
}

Access & Terms

This repository is gated. All users agree to:

  1. Cite Mozilla Common Voice and this dataset in any publications
  2. Never attempt to re-identify human speakers from acoustic features
  3. Never alter, truncate, or remove the forensic watermarks (seeded FP16 noise)
  4. Comply with upstream Mozilla Common Voice CC0 terms
  5. Not redistribute this feature matrix as a standalone product without a separate commercial licence from Moonscape Software

Tier 1 (Academic / Non-Commercial): Approved on institutional email. Full terms in LICENSE.md.

Tier 2/3 (Commercial): Requires EULA execution with Moonscape Software.


