SoulX-FlashHead: Oracle-guided Generation of Infinite Real-time Streaming Talking Heads
Tan Yu*, Qian Qiao*✉, Le Shen*, Ke Zhou, Jincheng Hu, Dian Sheng, Bo Hu, Haoming Qin, Jun Gao, Changhai Zhou, Shunshun Yin, Siyuan Liu ✉
*Equal Contribution ✉Corresponding Author
VividHead Dataset
Highlights
- 🔥 Large-scale, high-quality talking-head dataset with 330K clips and 782 hours of head-cropped videos
- 🔥 Broad diversity across 15+ languages and a wide age range (0–60+)
- 🔥 Rich annotations including age, gender, ethnicity, and language
- 🔥 Unified and standardized processing, with a fixed frame rate of 25 FPS and a resolution of 512 × 512
Showcase
🌰 Examples
Dataset Statistics
This dataset exhibits strong diversity across multiple dimensions:
- Duration: 3s–60s+, bimodal (peaks ~5s, ~10s), mean 8.37s; most clips in 3–15s.
- Age: 31–45 (432.5h), 19–30 (277.2h), 46–60 (61.3h), 60+ (10.4h), 0–19 (0.2h).
- Language (Top 10): English (651.4h), Chinese (67.5h), Russian (8.7h), Spanish (7.1h), Portuguese (6.4h), Welsh (5.4h), Hindi (5.3h), German (3.6h), French (3.0h), Korean (2.7h); 15+ languages in total.
- Gender & ethnicity: Male (552.8h), Female (229.0h); White (506.7h), Asian (113.1h), Latino/Hispanic (56.5h), Middle Eastern (42.9h), Black (36.4h).
Figure panels: Duration · Age group · Language (Top 10) · Gender & ethnicity
Comparison with Other Datasets
| Dataset | Speakers | Face Crop | Clips | Hours | Resolution | Language | Age | Ethnicity | Source |
|---|---|---|---|---|---|---|---|---|---|
| MEAD | 60 | ✅ | 281.4K | 39 | 384p | English | 20–35 | – | Lab |
| HDTF | 362 | ✅ | 10K | 15.8 | 512p | – | – | – | Wild |
| AVSpeech | 150K | ❌ | 2.5M | 4700 | 720p, 1080p | – | – | – | Wild |
| Hallo3 | – | ✅ | 101.5K | 70 | 720p | – | – | – | Wild |
| OpenHumanVid | – | ❌ | 13.4M | 16.7K | 720p | – | – | – | Wild |
| TalkVid | 7,729 | ❌ | 281.4K | 1244 | 1080p, 2160p | 15 lang. | 0–60+ | 3 | Wild |
| SpeakerVid | 83K | ❌ | 5.2M | 8.7K | 1080p | – | – | – | Wild |
| Ours | 60K | ✅ | 330K | 782 | 512p | 15 lang. | 0–60+ | 3 | Wild |
Data Processing Pipeline
Our data processing pipeline is designed to construct a large-scale, high-quality talking-head dataset through systematic preprocessing, filtering, and annotation, ensuring sample uniqueness, temporal consistency, and reliable multi-modal supervision.
Data Preprocessing Stage
- Data collection: Aggregates raw content from web videos and various open-source video datasets to build a diverse data pool.
- Deduplication & slicing: Employs MD5 hash verification to eliminate duplicate files and uses PySceneDetect to divide long videos into coherent clips ranging from 3 to 60+ seconds.
- Standardize to 25 FPS: Normalizes all video clips to a uniform frame rate of 25 FPS using FFmpeg to ensure temporal consistency for model training.
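The preprocessing steps above can be sketched as a small script. This is a minimal illustration, not the released pipeline: the helper names, PySceneDetect detector settings, and FFmpeg flags are assumptions chosen for clarity.

```python
# Illustrative sketch of the preprocessing stage: MD5 deduplication,
# scene slicing, and 25 FPS normalization. Function names and parameters
# are hypothetical; the dataset's exact settings are not published.
import hashlib
import subprocess
from pathlib import Path


def md5_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large videos never load fully into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def deduplicate(paths: list[Path]) -> list[Path]:
    """Keep only the first file seen for each MD5 digest."""
    seen: set[str] = set()
    unique: list[Path] = []
    for p in paths:
        digest = md5_digest(p)
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique


def slice_scenes(src: Path):
    """Split a long video on shot boundaries with PySceneDetect."""
    # Imported lazily so the rest of the module works without the dependency.
    from scenedetect import ContentDetector, detect
    return detect(str(src), ContentDetector())


def normalize_fps(src: Path, dst: Path, fps: int = 25) -> None:
    """Re-encode a clip at a fixed frame rate with FFmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src), "-r", str(fps), str(dst)],
        check=True,
    )
```

Deduplicating before slicing keeps PySceneDetect and FFmpeg from being run twice on byte-identical source videos.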
Data Filter & Annotation Stage
- Face detection & crop: Detects face visibility and crops valid sequences to a centered $512 \times 512$ region.
- Jump cut detection: Uses optical flow analysis to identify and exclude sequences containing scene discontinuities or abrupt transitions.
- Faceless filter: Screens out frames where a detectable face is missing or the head region is improperly framed.
- DWPose extraction & hand filter: Extracts body keypoints and strictly removes clips featuring hand-over-face occlusion to prevent generation artifacts.
- Lip-sync: Uses the SyncNet model to compute confidence scores (LSE-C and LSE-D), discarding samples with poor audio-visual alignment.
- Audio feature & attribute labeling: Extracts robust streaming features via Wav2Vec and annotates metadata including language, ethnicity, age, and gender.
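Two of the steps above reduce to simple geometry and thresholding, sketched below. The crop helper and the LSE-C/LSE-D cutoffs (`c_min`, `d_max`) are hypothetical values for illustration; the actual thresholds used for the dataset are not published.

```python
# Illustrative helpers for the filter stage. The crop logic and the
# lip-sync thresholds are assumptions, not the dataset's exact settings.

def centered_crop_box(face_box, frame_w, frame_h, size=512):
    """Return a size x size crop centered on a face bounding box
    (x0, y0, x1, y1), clamped so it stays inside the frame."""
    x0, y0, x1, y1 = face_box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half = size / 2
    left = min(max(cx - half, 0), frame_w - size)
    top = min(max(cy - half, 0), frame_h - size)
    return int(left), int(top), int(left) + size, int(top) + size


def passes_lip_sync(lse_c, lse_d, c_min=3.0, d_max=10.0):
    """Keep a clip only if SyncNet confidence (LSE-C, higher is better)
    and audio-visual distance (LSE-D, lower is better) clear thresholds."""
    return lse_c >= c_min and lse_d <= d_max
```

Clamping the crop instead of padding keeps every output frame at native resolution; clips whose face box cannot fit a 512 × 512 window would instead be dropped by the faceless filter.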
📚 Citation
If you find our work useful in your research, please consider citing:
@misc{yu2026soulxflashheadoracleguidedgenerationinfinite,
title={SoulX-FlashHead: Oracle-guided Generation of Infinite Real-time Streaming Talking Heads},
author={Tan Yu and Qian Qiao and Le Shen and Ke Zhou and Jincheng Hu and Dian Sheng and Bo Hu and Haoming Qin and Jun Gao and Changhai Zhou and Shunshun Yin and Siyuan Liu},
year={2026},
eprint={2602.07449},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2602.07449},
}
License
Our VividHead dataset is released under the CC-BY-4.0 license and is intended for research and non-commercial purposes. The video samples are collected from publicly available datasets.