---
license: cc-by-nc-4.0
task_categories:
  - automatic-speech-recognition
  - text-to-speech
  - text-to-audio
language:
  - cy
tags:
  - speech
  - welsh
  - cymraeg
  - 3d-face
  - facial-landmarks
  - multimodal
  - fluency
  - pronunciation
  - 4d-dataset
size_categories:
  - 100K<n<1M
pretty_name: CymruFluency Welsh Speech Dataset
---

# Welsh Speech Dataset

A multimodal dataset of 33 speakers producing 10 Welsh phrases, captured using 3DMD technology with audio and dense facial landmark annotations.

## Dataset Overview

- **Speakers:** 33 participants
- **Phrases:** 10 Welsh phrases per speaker
- **Sequences:** ~330 (33 speakers × 10 phrases)
- **Modalities:**
  - Audio recordings (`.wav`)
  - 3D facial reconstructions (`.obj` meshes + texture maps)
  - 68-point facial landmarks (ibug68 template)
- **Fluency scores:** each phrase rated 0-5 (5 = perfect, 0 = many errors)
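The sequence count above follows directly from the speaker × phrase grid. A minimal sketch of how that shape can be sanity-checked, using a toy DataFrame with the documented `speaker_id`/`phrase_id` columns rather than the real `metadata.parquet`:

```python
import pandas as pd

# toy stand-in for the sequence-level metadata:
# one row per (speaker, phrase) pair
metadata = pd.DataFrame(
    [(s, p) for s in range(1, 34) for p in range(1, 11)],
    columns=["speaker_id", "phrase_id"],
)

# 33 speakers x 10 phrases = 330 sequences
assert metadata["speaker_id"].nunique() == 33
assert metadata["phrase_id"].nunique() == 10
assert len(metadata) == 330
```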

## Preview

*Subject uttering the Welsh phrase “Gwybodaeth angenrheidiol” (EN: “Necessary information”; IPA: /ˈɡʊɨ̯bɔðaɪθ aŋɛnˈhreɪ̯djɔl/)*

## Repository Structure

This dataset is split into four repositories for convenience:

1. `welsh-speech-dataset` (this repo) - Main hub with sequence-level metadata
2. `welsh-speech-audio` - Audio recordings only
3. `welsh-speech-3d-meshes` - 3D facial meshes (zipped per sequence)
4. `welsh-speech-landmarks` - Facial landmarks (frame-level Parquet)
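Since each modality repo shares one naming scheme per sequence (visible in the usage examples below, e.g. `audio/speaker_01_phrase_01.wav`), a small hypothetical helper can build per-modality paths; the function name and the `mesh` pattern are assumptions inferred from those examples:

```python
def modality_path(speaker_id: int, phrase_id: int, modality: str) -> str:
    """Build the relative file path for a sequence in a modality repo.

    Hypothetical helper based on the speaker_XX_phrase_YY naming pattern
    used in the usage examples (e.g. audio/speaker_01_phrase_01.wav).
    """
    stem = f"speaker_{speaker_id:02d}_phrase_{phrase_id:02d}"
    patterns = {
        "audio": f"audio/{stem}.wav",
        "mesh": f"meshes/{stem}.zip",
    }
    return patterns[modality]

print(modality_path(1, 1, "audio"))  # audio/speaker_01_phrase_01.wav
```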

## Metadata

The `metadata.csv` and `metadata.parquet` files contain sequence-level data (one row per speaker-phrase pair):

| Column | Description |
|---|---|
| `speaker_id` | Speaker identifier (1-33) |
| `phrase_id` | Phrase identifier (1-10) |
| `audio_path` | Path to audio file |
| `mesh_zip_path` | Path to 3D mesh zip file |
| `fluency_score` | Pronunciation quality score (0-5) |
| `welsh_text` | Welsh phrase text |
| `english_translation` | English translation |
| `num_frames` | Number of frames in the sequence |
| `has_3d` | Boolean indicating 3D data availability |
| `has_landmark` | Boolean indicating landmark availability |

**Note:** Landmark data is stored at frame level in `landmarks.parquet` in the landmarks repository. Join on `speaker_id` and `phrase_id` to combine it with this sequence-level metadata.
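This is a many-to-one join: every frame row picks up its sequence's attributes. A sketch on toy DataFrames, where only the `speaker_id`/`phrase_id` keys and `fluency_score` are documented columns and the `frame` column is an illustrative assumption:

```python
import pandas as pd

# toy sequence-level metadata (columns documented above)
metadata = pd.DataFrame({
    "speaker_id": [1, 1],
    "phrase_id": [1, 2],
    "fluency_score": [5, 3],
})

# toy frame-level landmarks; the "frame" column is an assumption,
# only speaker_id/phrase_id are the documented join keys
landmarks = pd.DataFrame({
    "speaker_id": [1, 1, 1],
    "phrase_id": [1, 1, 2],
    "frame": [1, 2, 1],
})

# left join keeps one row per frame, adding the sequence's fluency score
merged = landmarks.merge(metadata, on=["speaker_id", "phrase_id"], how="left")
assert len(merged) == len(landmarks)
```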

## Welsh Phrases

| ID | Welsh Text | English Translation |
|---|---|---|
| 1 | Eisteddfod yr Urdd | Welsh Youth Music Competition |
| 2 | Prynhawn da bawb | Good afternoon everyone |
| 3 | Dyn busnes yw e | He's a businessman |
| 4 | Papur a phensil | Paper and pencil |
| 5 | Ardderchog | Excellent / Superb |
| 6 | Llwyddiant ysgubol | Great success |
| 7 | Yng nghanol y dref | In the town center |
| 8 | Dwy neuadd gymunedol | Two community halls |
| 9 | Llunio rhestr fer | Drawing up a shortlist |
| 10 | Gwybodaeth angenrheidiol | Necessary information |

## Usage

### Load Metadata

```python
import pandas as pd

# load sequence-level metadata
metadata = pd.read_parquet("metadata.parquet")

# filter by fluency score
high_quality = metadata[metadata["fluency_score"] >= 4]

# get info for a specific speaker/phrase
seq = metadata[(metadata["speaker_id"] == 1) & (metadata["phrase_id"] == 1)].iloc[0]
print(f"Frames: {seq['num_frames']}, Fluency: {seq['fluency_score']}")
```
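Beyond row-level filtering, the same sequence-level table supports quick aggregate checks. A sketch, on a toy frame with the documented `phrase_id` and `fluency_score` columns, of summarising mean fluency per phrase:

```python
import pandas as pd

# toy metadata with the documented fluency_score column
metadata = pd.DataFrame({
    "phrase_id": [1, 1, 2, 2],
    "fluency_score": [5, 3, 4, 4],
})

# mean pronunciation quality per phrase
per_phrase = metadata.groupby("phrase_id")["fluency_score"].mean()
print(per_phrase.to_dict())  # {1: 4.0, 2: 4.0}
```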

### Access Specific Modalities

Download only what you need:

```python
import zipfile

import pandas as pd
from huggingface_hub import hf_hub_download

# download audio
audio_file = hf_hub_download(
    repo_id="arvinsingh/welsh-speech-audio",
    filename="audio/speaker_01_phrase_01.wav",
    repo_type="dataset",
)

# download the 3D mesh zip for a sequence
mesh_zip = hf_hub_download(
    repo_id="arvinsingh/welsh-speech-3d-meshes",
    filename="meshes/speaker_01_phrase_01.zip",
    repo_type="dataset",
)

# extract meshes
with zipfile.ZipFile(mesh_zip, "r") as zf:
    zf.extractall("speaker_01_phrase_01")
    # contains: 001.obj, 001.png, 002.obj, 002.png, ...

# load frame-level landmarks
landmarks = pd.read_parquet(
    hf_hub_download(
        repo_id="arvinsingh/welsh-speech-landmarks",
        filename="landmarks.parquet",
        repo_type="dataset",
    )
)

# join landmarks with the main metadata (loaded above) to get fluency scores
merged = landmarks.merge(metadata, on=["speaker_id", "phrase_id"])
```
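The extracted `.obj` files can be read with any Wavefront OBJ loader. A minimal dependency-free sketch that pulls out vertex positions (the standard `v x y z` lines of the OBJ format), demonstrated on a tiny synthetic file rather than the real dataset:

```python
import tempfile

def load_obj_vertices(path: str):
    """Parse vertex positions from a Wavefront OBJ file."""
    vertices = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "v":  # vertex position line
                vertices.append(tuple(float(c) for c in parts[1:4]))
    return vertices

# demo on a tiny two-vertex OBJ (texture and face lines are skipped)
with tempfile.NamedTemporaryFile("w", suffix=".obj", delete=False) as f:
    f.write("v 0.0 1.0 2.0\nvt 0.5 0.5\nv 3.0 4.0 5.0\nf 1 2 1\n")
    demo_path = f.name

verts = load_obj_vertices(demo_path)
assert verts == [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
```

In practice a mesh library such as `trimesh` can load both the geometry and the accompanying texture map in one call.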

## Citation

If you use this dataset, please cite both the paper and the dataset:

```bibtex
@inproceedings{bali_2026_cymrufluency,
  author       = {Bali, Arvinder Pal Singh and
                  Tam, Gary KL and
                  Siris, Avishek and
                  Andrews, Gareth and
                  Lai, Yukun and
                  Tiddeman, Bernie and
                  Ffrancon, Gwenno},
  title        = {CymruFluency - A Fusion Technique and a 4D Welsh Dataset for Welsh Fluency Analysis},
  booktitle    = {Advanced Concepts for Intelligent Vision Systems},
  pages        = {96--108},
  year         = 2026,
  publisher    = {Springer Nature Switzerland},
  doi          = {10.1007/978-3-032-07343-3_8},
  url          = {https://doi.org/10.1007/978-3-032-07343-3_8},
}

@dataset{bali_2025_dataset,
  author       = {Bali, Arvinder Pal Singh and
                  Tam, Gary KL and
                  Siris, Avishek and
                  Andrews, Gareth and
                  Lai, Yukun and
                  Tiddeman, Bernie and
                  Ffrancon, Gwenno},
  title        = {Dataset and code for "CymruFluency - A fusion technique and a 4D Welsh dataset for Welsh fluency analysis"},
  month        = may,
  year         = 2025,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.15397513},
  url          = {https://doi.org/10.5281/zenodo.15397513},
}
```

## Original Data

The original dataset is published on Zenodo: [10.5281/zenodo.15397513](https://doi.org/10.5281/zenodo.15397513)

## License

This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

## Acknowledgments

The dataset was collected using 3DMD facial capture technology, and all frames were manually annotated with the ibug68 facial landmark template.