# UniLS-Talk Dataset
To enable research on unified speaking and listening avatar generation, we curate the UniLS-Talk Dataset, a large-scale collection of high-quality 3D facial motion data. A carefully designed tracking pipeline extracts per-frame FLAME parameters, including expression coefficients, eye gaze, jaw pose, and head pose annotations. The dataset comprises two complementary parts:
- Paired conversational data sourced from the Seamless Interaction dataset, providing synchronized dual-speaker videos with natural turn-taking dynamics between speaking and listening.
- Unpaired multi-scenario data aggregated from CelebV, TalkingHead-1KH, TEDTalk, VFHQ, and other in-the-wild videos, covering diverse facial behaviors across identities and environments (news broadcasts, interviews, casual talking, etc.).
| Category | Source | Hours | Audio | Motion |
|---|---|---|---|---|
| Paired Conversational | Seamless Interaction Dataset | 657.5 h | ✅ | ✅ |
| Unpaired Multi-Scenario | Diverse identities and environments from in-the-wild videos | 546.5 h | ❌ | ✅ |
| Total | — | 1,204 h | — | — |
The paired conversational data is split into 622.5 hours for training, 4.8 hours for validation, and 30.2 hours for testing. All data includes FLAME expression parameters, jaw and head pose, and eye-gaze annotations at 25 fps.
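As a quick sanity check on the split sizes above, the stated hours can be converted to frame counts at the 25 fps annotation rate. This is only an illustrative sketch; the helper name and split dictionary are ours, not part of the official release:

```python
FPS = 25  # annotation rate stated in the dataset card


def hours_to_frames(hours: float, fps: int = FPS) -> int:
    """Convert a duration in hours to a frame count at the given rate."""
    return round(hours * 3600 * fps)


# Paired conversational split sizes (hours) from the table above
splits = {"train": 622.5, "val": 4.8, "test": 30.2}
frame_counts = {name: hours_to_frames(h) for name, h in splits.items()}
# The training split alone spans 622.5 h x 3600 s/h x 25 fps = 56,025,000 frames
```

The three splits sum to the 657.5 h of paired conversational data listed in the table.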