# VTL Speech Landmarks Dataset

Articulatory speech synthesis dataset with acoustic landmarks, generated using VocalTractLab (VTL).

## Dataset Description
This dataset contains synthesized speech for 117,497 English words from the CMU Pronouncing Dictionary, generated with two speakers (male and female). Each word includes:
- Audio: 48kHz WAV files
- Landmarks: Acoustic-phonetic event markers (JSON)
- Articulatory data: Full vocal tract trajectories from VTL (JSON)
## Speakers
| Speaker | Base F0 | Emphasis |
|---|---|---|
| Male | 120 Hz | 1.2 |
| Female | 200 Hz | 1.4 |
## Dataset Structure

```
vtl-speech-landmarks/
├── male/
│   ├── wav/           # 117,497 audio files
│   ├── landmarks/     # 117,497 landmark JSON files
│   └── articulatory/  # 117,497 articulatory JSON files
└── female/
    ├── wav/           # 117,497 audio files
    ├── landmarks/     # 117,497 landmark JSON files
    └── articulatory/  # 117,497 articulatory JSON files
```
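Given this layout, the repo-relative path of each file can be derived from the word and speaker alone. A small illustrative helper, assuming the naming shown in the examples on this card (`<word>.wav`, `<word>_landmarks.json`, `<word>_articulatory.json`):

```python
from pathlib import PurePosixPath

def word_paths(word: str, speaker: str) -> dict:
    """Build repo-relative paths for one word/speaker pair.

    Naming follows the examples in this card: <word>.wav,
    <word>_landmarks.json, <word>_articulatory.json.
    """
    base = PurePosixPath(speaker)
    return {
        "wav": str(base / "wav" / f"{word}.wav"),
        "landmarks": str(base / "landmarks" / f"{word}_landmarks.json"),
        "articulatory": str(base / "articulatory" / f"{word}_articulatory.json"),
    }

print(word_paths("hello", "male"))
# {'wav': 'male/wav/hello.wav', 'landmarks': 'male/landmarks/hello_landmarks.json',
#  'articulatory': 'male/articulatory/hello_articulatory.json'}
```

These paths can be passed directly as the `filename` argument of `hf_hub_download` (see Usage below).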
## Landmark Types
| Type | Name | Description |
|---|---|---|
| V | Vowel | Maximum mid-frequency energy |
| G | Glide | Energy change in formant region (for r, l, w, y) |
| Sc | Stop Closure | Start of oral closure |
| Sr | Stop Release | Burst energy when stop releases |
| Fc | Fricative Closure | Peak frication turbulence onset |
| Fr | Fricative Release | Transition out of fricative |
| Nc | Nasal Closure | Abrupt change when nasal begins |
| Nr | Nasal Release | Abrupt change when nasal ends |
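Each entry in a landmark file carries one of the eight type codes above, so simple tallies and filters fall out directly. A minimal sketch using hypothetical sample entries in the documented record format (the values here are made up for illustration):

```python
from collections import Counter

# Hypothetical landmark entries in the format documented under "File Formats"
landmarks = [
    {"type": "Fc", "time_ms": 50.0, "phoneme": "h", "ipa": "h", "confidence": 0.95},
    {"type": "V", "time_ms": 180.0, "phoneme": "@", "ipa": "ʌ", "confidence": 0.92},
    {"type": "G", "time_ms": 260.0, "phoneme": "l", "ipa": "l", "confidence": 0.90},
    {"type": "V", "time_ms": 400.0, "phoneme": "O", "ipa": "oʊ", "confidence": 0.93},
]

# Tally landmark types across the record
counts = Counter(lm["type"] for lm in landmarks)
print(counts)  # Counter({'V': 2, 'Fc': 1, 'G': 1})

# Keep only vowel landmarks above a confidence threshold
vowels = [lm for lm in landmarks if lm["type"] == "V" and lm["confidence"] >= 0.9]
```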
## File Formats

### Landmarks JSON (`*_landmarks.json`)

```json
{
  "word": "hello",
  "pronunciation": "HH AH0 L OW1",
  "arpabet": ["HH", "AH0", "L", "OW1"],
  "vtl_phonemes": ["h", "@", "l", "O", "U"],
  "speaker": "male",
  "duration_ms": 520.0,
  "sample_rate": 48000,
  "landmarks": [
    {"type": "Fc", "time_ms": 50.0, "phoneme": "h", "ipa": "h", "confidence": 0.95},
    {"type": "V", "time_ms": 180.0, "phoneme": "@", "ipa": "ʌ", "confidence": 0.92},
    ...
  ],
  "phoneme_timings": [
    {"phoneme": "h", "start": 0.05, "end": 0.12, "duration": 0.07},
    ...
  ]
}
```
### Articulatory JSON (`*_articulatory.json`)

```json
{
  "sample_rate": 400.0,
  "num_frames": 208,
  "time": [0.0, 0.0025, 0.005, ...],
  "velum_opening": [...],
  "lip_aperture": [...],
  "tongue_tip_constriction": [...],
  "tongue_body_constriction": [...],
  "tube_areas": [[...], [...], ...]
}
```
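The per-frame fields are parallel arrays of length `num_frames`, and `time` appears to advance in uniform steps of `1 / sample_rate` seconds (0.0025 s at 400 Hz). A sanity-check sketch over a hypothetical miniature record of this shape:

```python
# Hypothetical 4-frame articulatory record shaped like the JSON above
record = {
    "sample_rate": 400.0,
    "num_frames": 4,
    "time": [0.0, 0.0025, 0.005, 0.0075],
    "velum_opening": [0.1, 0.1, 0.2, 0.2],
    "lip_aperture": [0.5, 0.4, 0.3, 0.3],
    "tongue_tip_constriction": [0.2, 0.2, 0.1, 0.1],
    "tongue_body_constriction": [0.3, 0.3, 0.3, 0.3],
    "tube_areas": [[1.0, 2.0], [1.0, 2.0], [1.1, 2.1], [1.1, 2.1]],
}

trajectories = ["velum_opening", "lip_aperture",
                "tongue_tip_constriction", "tongue_body_constriction"]

# Every per-frame field should have exactly num_frames entries
n = record["num_frames"]
assert len(record["time"]) == n
assert all(len(record[k]) == n for k in trajectories)
assert len(record["tube_areas"]) == n

# Frames spaced at 1 / sample_rate seconds
step = 1.0 / record["sample_rate"]  # 0.0025 s at 400 Hz
assert all(abs(t - i * step) < 1e-9 for i, t in enumerate(record["time"]))
print(f"{n} frames at {record['sample_rate']} Hz ({n * step * 1000:.1f} ms)")
```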
### Audio (WAV)
- Format: 16-bit PCM
- Sample rate: 48,000 Hz
- Channels: Mono
## Usage

### Download specific files

```python
from huggingface_hub import hf_hub_download

# Download a landmark file
landmarks_path = hf_hub_download(
    repo_id="mcamara/vtl-speech-landmarks",
    filename="male/landmarks/hello_landmarks.json",
    repo_type="dataset",
)

# Download audio
audio_path = hf_hub_download(
    repo_id="mcamara/vtl-speech-landmarks",
    filename="male/wav/hello.wav",
    repo_type="dataset",
)
```
### Load landmarks

```python
import json

with open(landmarks_path) as f:
    data = json.load(f)

print(f"Word: {data['word']}")
print(f"Duration: {data['duration_ms']} ms")
for lm in data['landmarks']:
    print(f"  {lm['type']} at {lm['time_ms']:.1f} ms - {lm['phoneme']} ({lm['ipa']})")
```
### Load and play audio

```python
import soundfile as sf

audio, sr = sf.read(audio_path)
print(f"Sample rate: {sr}, Duration: {len(audio)/sr:.2f}s")
```
## Generation Details
- Source: CMU Pronouncing Dictionary (117,497 words)
- Synthesizer: VocalTractLab (VTL) articulatory speech synthesizer
- Landmark extraction: Energy-based detection from spectral analysis
- Phoneme mapping: ARPABET to VTL phoneme conversion with diphthong expansion
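To make the "energy-based detection" idea concrete, here is a toy frame-level RMS energy contour over a synthetic signal. This is not the dataset's actual extraction code (which works on spectral bands); it only sketches the kind of frame energy that landmark detectors threshold:

```python
import math

def rms_energy(samples, frame_len, hop):
    """Frame-level RMS energy of a mono signal (a simplified stand-in
    for the band-limited energy measures used in landmark detection)."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        frames.append(math.sqrt(sum(x * x for x in frame) / frame_len))
    return frames

# Toy signal: 10 ms of silence followed by 10 ms of a 1 kHz tone at 48 kHz
sr = 48000
silence = [0.0] * 480
tone = [0.5 * math.sin(2 * math.pi * 1000 * t / sr) for t in range(480)]

energy = rms_energy(silence + tone, frame_len=480, hop=480)
print(energy)  # near-zero for the silent frame, ~0.354 for the tone
```

A large jump between adjacent frames is the kind of abrupt energy change the Sc/Sr and Nc/Nr landmarks mark.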
## Applications
- Acoustic-phonetic research
- Speech recognition training data
- Text-to-speech development
- Phonetic landmark detection models
- Articulatory synthesis research
## Citation

If you use this dataset, please cite:

```bibtex
@misc{vtl-speech-landmarks,
  title={VTL Speech Landmarks Dataset},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/mcamara/vtl-speech-landmarks}}
}
```
## License
MIT License