---
pretty_name: Rhubarb Viseme Dataset
tags:
- visemes
- lip-sync
- rhubarb-lip-sync
- speech-processing
language:
- en
license: other
size_categories:
- 1K<n<10K
task_categories:
- audio-classification
annotations_creators:
- machine-generated
language_creators:
- machine-generated
multilinguality:
- monolingual
dataset_info:
  features:
  - name: audio_path
    dtype: string
  - name: sample_rate_target
    dtype: int64
  - name: win_length_ms
    dtype: float64
  - name: hop_length_ms
    dtype: float64
  - name: n_visemes
    dtype: int64
  - name: viseme_list
    sequence: string
  - name: metadata
    struct:
    - name: soundFile
      dtype: string
    - name: duration
      dtype: float64
    - name: mouthCues
      sequence:
        struct:
        - name: start
          dtype: float64
        - name: end
          dtype: float64
        - name: value
          dtype: string
  - name: mouth_cues
    sequence:
      struct:
      - name: start
        dtype: float64
      - name: end
        dtype: float64
      - name: value
        dtype: string
  - name: num_feature_frames
    dtype: int64
  - name: frame_labels
    sequence: int64
  splits:
  - name: train
    num_examples: 1244
  download_size: 35MB
  dataset_size: 35MB
---
# Rhubarb Viseme Dataset

## Overview
This dataset contains 1,244 English speech alignment files with frame-level viseme labels produced via Rhubarb Lip Sync. Each alignment is expressed at a 16 kHz target sample rate with a 25 ms analysis window and 10 ms hop, and retains the relative path to the corresponding WAV used during generation.
Key figures:

- Total aligned duration: 14,280 s (3.97 h) across 1,244 clips
- Clip length: mean 11.48 s, median 9.15 s, min 0.84 s, max 38.21 s (5th-95th percentiles: 2.55-29.54 s)
- Frame count: 1,426,112 total (~1,146 per clip on average)
- Speakers: 52 synthetic voices with 22-25 clips each (mean 23.9 clips per speaker)
- Visemes: 9 classes (A, B, C, D, E, F, G, H, X), as detailed below
| Viseme | Frames | Share |
|---|---|---|
| A | 76,124 | 5.34% |
| B | 630,457 | 44.21% |
| C | 229,139 | 16.07% |
| D | 49,494 | 3.47% |
| E | 139,937 | 9.81% |
| F | 93,707 | 6.57% |
| G | 44,553 | 3.12% |
| H | 31,499 | 2.21% |
| X (rest) | 131,202 | 9.20% |
The shortest aligned clip is Sarah_just_ask_me_1761254304995.wav (0.84 s); the longest is Thomas_A_labyrinthine_lexicon_unfurls_1761275991237.wav (38.21 s).
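The class distribution is heavily skewed toward B, which covers over 44% of all frames. For frame-level classifier training, one common mitigation (a sketch of a standard recipe, not something the dataset prescribes) is inverse-frequency class weighting derived from the table above:

```python
import numpy as np

# Frame counts per viseme, taken from the distribution table above
VISEMES = ["A", "B", "C", "D", "E", "F", "G", "H", "X"]
frame_counts = np.array(
    [76124, 630457, 229139, 49494, 139937, 93707, 44553, 31499, 131202],
    dtype=np.float64,
)

# Inverse-frequency weights, normalized so a perfectly balanced
# dataset would yield all-ones
class_weights = frame_counts.sum() / (len(frame_counts) * frame_counts)
```

Passing `class_weights` to a weighted loss (e.g., `torch.nn.CrossEntropyLoss(weight=...)`) upweights rare classes such as H and G relative to the dominant B class.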
## Directory Layout

```
.
├── manifest.jsonl              # one JSON object per clip pointing to its alignment
├── alignments/
│   ├── <clip>.alignment.json   # frame-level labels and metadata
│   └── <clip>.rhubarb.json     # raw mouthCue output from Rhubarb
├── viseme_map.json             # viseme -> index mapping used in frame labels
└── DATASET.md                  # this document
```
`manifest.jsonl` is newline-delimited JSON. Each record exposes the relative path to its corresponding alignment file, for example:

```json
{"alignment_path": "alignments/Adam_Beneath_the_glacier_s_glassy_g_1761276211449.alignment.json"}
```
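Parsing the manifest amounts to one `json.loads` per line; a minimal sketch (the helper name `read_manifest` is ours):

```python
import json

def read_manifest(jsonl_text: str) -> list:
    """Parse newline-delimited manifest records into alignment paths."""
    return [json.loads(line)["alignment_path"]
            for line in jsonl_text.splitlines() if line.strip()]

sample = '{"alignment_path": "alignments/Adam_Beneath_the_glacier_s_glassy_g_1761276211449.alignment.json"}\n'
paths = read_manifest(sample)
```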
## Alignment Files
Every `<clip>.alignment.json` contains:

- `audio_path`: relative path to the WAV associated with the alignment
- `sample_rate_target`, `win_length_ms`, `hop_length_ms`: analysis parameters (16 kHz target, 25 ms window, 10 ms hop)
- `viseme_list`: ordered viseme labels; `frame_labels` stores integer indices into this list
- `mouth_cues`: the original intervals from Rhubarb (matching the companion `<clip>.rhubarb.json`)
- `num_feature_frames`: total frames in `frame_labels`
- `frame_labels`: per-frame viseme indices aligned to analysis hops (first frame begins at t = 0)
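Because `frame_labels` indexes into `viseme_list`, decoding back to viseme symbols is a direct lookup; a sketch with a toy payload (the field values are illustrative only):

```python
def decode_frame_labels(alignment: dict) -> list:
    """Map integer frame labels back to viseme symbols via viseme_list."""
    viseme_list = alignment["viseme_list"]
    return [viseme_list[i] for i in alignment["frame_labels"]]

toy = {
    "viseme_list": ["A", "B", "C", "D", "E", "F", "G", "H", "X"],
    "frame_labels": [8, 8, 1, 1, 2, 8],
}
symbols = decode_frame_labels(toy)  # ['X', 'X', 'B', 'B', 'C', 'X']
```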
Reconstructing the effective clip duration from the metadata uses `(num_feature_frames - 1) * hop + win`, with `hop` and `win` expressed in seconds.
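In code, with hop and window given in milliseconds as they are stored in the alignment files:

```python
def effective_duration_s(num_feature_frames: int,
                         hop_length_ms: float,
                         win_length_ms: float) -> float:
    """(num_feature_frames - 1) * hop + win, converted from ms to seconds."""
    return (num_feature_frames - 1) * hop_length_ms / 1000.0 + win_length_ms / 1000.0

# An average-sized clip: ~1,146 frames at a 10 ms hop and 25 ms window
d = effective_duration_s(1146, 10.0, 25.0)  # 11.475 s
```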
## Generation Workflow

To regenerate alignments, use the reference script below:

- Place WAV files under a source directory (e.g., `audio/`).
- Run the script with matching arguments so it can locate the WAVs, call the Rhubarb binary, and write the JSON artifacts.
- Inspect `manifest.jsonl`, `alignments/`, and `viseme_map.json` for the resulting outputs.
Example invocation:

```bash
python generate_dataset.py --wavs_dir audio --out_dir dataset --rhubarb_path /path/to/rhubarb --sr 16000 --win_length_ms 25 --hop_length_ms 10
```
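The inspection step can be partly automated. One possible sanity check (a sketch; the helper name `check_alignment` is ours) verifies the internal consistency of a loaded alignment payload:

```python
def check_alignment(alignment: dict) -> None:
    """Raise ValueError if an alignment payload is internally inconsistent."""
    labels = alignment["frame_labels"]
    if len(labels) != alignment["num_feature_frames"]:
        raise ValueError("frame_labels length != num_feature_frames")
    if any(not (0 <= i < alignment["n_visemes"]) for i in labels):
        raise ValueError("frame label index out of viseme range")

# A consistent toy payload passes silently
check_alignment({"frame_labels": [0, 8, 1], "num_feature_frames": 3, "n_visemes": 9})
```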
## Reference Script: `generate_dataset.py`
```python
#!/usr/bin/env python3
"""
Generate frame-aligned viseme labels using Rhubarb Lip Sync.

For each WAV:
  1) Call Rhubarb to get mouthCues JSON: [{"start": float, "end": float, "value": str}, ...]
  2) Align to a fixed feature hop (e.g., 10 ms) and window (e.g., 25 ms)
  3) Save a per-file alignment and a manifest.jsonl for training

Example:
    python generate_dataset.py \
        --wavs_dir data/wavs \
        --out_dir data/dataset \
        --rhubarb_path rhubarb \
        --sr 16000 --win_length_ms 25 --hop_length_ms 10
"""
import argparse
import json
import math
import os
import subprocess
from pathlib import Path
from typing import Any, Dict, List

import numpy as np
import soundfile as sf  # audio metadata (samplerate, frames, duration) without a full decode

# === Adjust if your Rhubarb setup uses a different viseme set/order ===
VISEMES = ["A", "B", "C", "D", "E", "F", "G", "H", "X"]
VI2IDX = {v: i for i, v in enumerate(VISEMES)}


def find_wavs(root: Path) -> List[Path]:
    return sorted(p for p in root.rglob("*.wav") if p.is_file())


def run_rhubarb(wav_path: Path, rhubarb_path: str, json_out: Path) -> None:
    json_out.parent.mkdir(parents=True, exist_ok=True)
    cmd = [rhubarb_path, "-f", "json", "-o", str(json_out), str(wav_path)]
    # Add flags such as "--extendedShapes" here if your pipeline uses them.
    completed = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    if completed.returncode != 0 or not json_out.exists():
        raise RuntimeError(
            f"Rhubarb failed for {wav_path}\nSTDOUT:\n{completed.stdout}\nSTDERR:\n{completed.stderr}"
        )


def load_mouth_cues(json_path: Path) -> List[Dict[str, Any]]:
    with open(json_path, "r", encoding="utf-8") as f:
        data = json.load(f)
    # Rhubarb JSON typically has {"mouthCues": [{"start":..,"end":..,"value":"A"}, ...]}
    return data["mouthCues"] if "mouthCues" in data else data


def time_to_frame_idx(t_sec: float, hop_sec: float) -> int:
    # Index of the frame whose start time lies at t_sec on the STFT hop grid (k * hop_sec)
    return int(math.floor(t_sec / hop_sec + 1e-8))


def build_frame_labels_from_cues(
    cues: List[Dict[str, Any]],
    num_frames: int,
    hop_sec: float,
) -> np.ndarray:
    """
    Convert interval cues to frame labels aligned to frames starting at k * hop_sec.
    For each frame k, with frame time t = k * hop_sec, assign the cue whose [start, end)
    contains t. Default to 'X' (rest) if no cue covers t.
    This yields a strictly causal alignment: the label for frame k depends only on
    the cue active at that time.
    """
    labels = np.full((num_frames,), fill_value=VI2IDX["X"], dtype=np.int64)
    # Single pointer sweep over cues sorted by start time
    sorted_cues = sorted(cues, key=lambda c: c["start"])
    j = 0
    for k in range(num_frames):
        t = k * hop_sec  # start time of frame k
        # advance j while the current cue ends before t
        while j < len(sorted_cues) and sorted_cues[j].get("end", 1e9) <= t:
            j += 1
        # check whether the current cue covers t
        if j < len(sorted_cues):
            c = sorted_cues[j]
            if c["start"] <= t < c.get("end", 1e9):
                # Unknown labels fall back to rest ('X')
                labels[k] = VI2IDX.get(c["value"], VI2IDX["X"])
    return labels


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--wavs_dir", type=str, required=True, help="Root directory containing WAV files")
    ap.add_argument("--out_dir", type=str, required=True, help="Output dataset directory")
    ap.add_argument("--rhubarb_path", type=str, default="rhubarb", help="Path to Rhubarb binary")
    ap.add_argument("--sr", type=int, default=16000, help="Target sample rate for consistent framing")
    ap.add_argument("--win_length_ms", type=float, default=25.0, help="STFT window length (ms)")
    ap.add_argument("--hop_length_ms", type=float, default=10.0, help="STFT hop length (ms)")
    ap.add_argument("--overwrite", action="store_true", help="Overwrite existing JSONs/labels")
    args = ap.parse_args()

    wavs_dir = Path(args.wavs_dir).resolve()
    out_dir = Path(args.out_dir).resolve()
    align_dir = out_dir / "alignments"
    align_dir.mkdir(parents=True, exist_ok=True)
    manifest_path = out_dir / "manifest.jsonl"

    win_length = int(round(args.win_length_ms * 1e-3 * args.sr))
    hop_length = int(round(args.hop_length_ms * 1e-3 * args.sr))
    hop_sec = hop_length / args.sr

    wav_paths = find_wavs(wavs_dir)
    if not wav_paths:
        raise SystemExit(f"No WAV files found under {wavs_dir}")

    if manifest_path.exists() and not args.overwrite:
        print(f"[INFO] {manifest_path} exists. Appending new entries (use --overwrite to recreate).")

    with open(manifest_path, "w" if args.overwrite else "a", encoding="utf-8") as man_f:
        for wav in wav_paths:
            rel_audio = os.path.relpath(wav, start=out_dir)
            base = wav.stem
            rhubarb_json = align_dir / f"{base}.rhubarb.json"
            alignment_json = align_dir / f"{base}.alignment.json"

            if (not rhubarb_json.exists()) or args.overwrite:
                print(f"[Rhubarb] {wav.name}")
                run_rhubarb(wav, args.rhubarb_path, rhubarb_json)
            else:
                print(f"[SKIP Rhubarb] {wav.name} (exists)")

            # Read audio duration and samplerate from the header (no full load)
            try:
                info = sf.info(str(wav))
            except RuntimeError as e:
                raise SystemExit(f"Failed to read audio info for {wav}: {e}")
            duration_sec = info.frames / float(info.samplerate)

            # Number of STFT frames at the target SR/hop/window.
            # Resampling does not change duration, so the original duration applies directly.
            win_sec = win_length / args.sr
            if duration_sec < win_sec:
                print(f"[WARN] {wav.name} shorter than window; skipping.")
                continue
            # Valid STFT frames start at t with t + win_sec <= duration
            max_k = math.floor((duration_sec - win_sec) / hop_sec)
            num_feat_frames = max_k + 1

            cues = load_mouth_cues(rhubarb_json)
            labels = build_frame_labels_from_cues(cues, num_feat_frames, hop_sec)

            # Save alignment
            alignment_payload = {
                "audio_path": rel_audio.replace("\\", "/"),
                "sample_rate_target": args.sr,
                "win_length_ms": args.win_length_ms,
                "hop_length_ms": args.hop_length_ms,
                "n_visemes": len(VISEMES),
                "viseme_list": VISEMES,
                "mouth_cues": cues,
                "num_feature_frames": int(num_feat_frames),
                "frame_labels": labels.tolist(),
            }
            with open(alignment_json, "w", encoding="utf-8") as f:
                json.dump(alignment_payload, f, ensure_ascii=False, indent=2)

            # Write manifest line
            man_f.write(json.dumps({
                "alignment_path": os.path.relpath(alignment_json, start=out_dir).replace("\\", "/")
            }) + "\n")

    # Save the viseme -> index map for reference
    with open(out_dir / "viseme_map.json", "w", encoding="utf-8") as f:
        json.dump(VI2IDX, f, indent=2)

    print("\n[Done] Alignments and manifest created.")
    print(f"  - Manifest:   {manifest_path}")
    print(f"  - Alignments: {align_dir}")


if __name__ == "__main__":
    main()
```