---
language:
- he
license: mit
task_categories:
- automatic-speech-recognition
tags:
- speech-recognition
- lyrics
- hebrew
- music
pretty_name: Caspi STT Benchmark
size_categories:
- n<1K
---
# Caspi STT Benchmark
A Hebrew speech-to-text (STT) evaluation dataset built from Mati Caspi songs: YouTube audio paired with reference lyrics, packaged for Hugging Face.
## Dataset description
- Audio: 16 kHz mono WAV segments (one row per track or segment).
- Text: reference transcript (lyrics, or the song title when lyrics are missing).
- Metadata: `id`, `youtube_id`, `title`, `song_name`.
Intended for STT benchmarking: compare model transcriptions against the `text` column (e.g., WER/CER).
## How to use
```python
from datasets import load_dataset, Audio

ds = load_dataset("ozlabs/caspi", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Example row
ex = ds[0]
# ex["audio"] -> decoded audio array; ex["text"] -> reference transcript
```
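To score a model against the references, you can use an evaluation library such as `jiwer`, or a minimal word-error-rate sketch like the one below (a standard word-level edit distance; how you produce the hypothesis transcripts is up to your ASR model and is not part of this dataset):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

Average `wer(ex["text"], model_output)` over all rows for a corpus-level score (the `model_output` variable here is a placeholder for your model's transcription).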
## Source
- Audio: extracted from YouTube (playlists) at 16 kHz mono.
- Lyrics: provided manually or via Shazam.
## License
MIT (or as specified in the repo). Audio and lyrics are provided for research and evaluation purposes only; respect YouTube's and the lyric providers' terms of service.