xg-chu committed 87ca0ed (verified, parent c1fd915): Create README.md
---
language:
- en
tags:
- webdataset
- audio
- motion
---

## UniLS-Talk Dataset

To enable research on unified speaking and listening avatar generation, we curate and construct the **UniLS-Talk Dataset**, a large-scale collection of high-quality 3D facial motion data. We apply a carefully designed tracking pipeline to extract per-frame [FLAME](https://flame.is.tue.mpg.de/) parameters, including expression coefficients, eye-gaze, jaw-pose, and head-pose annotations. The dataset comprises two complementary parts:

- **Paired conversational data** sourced from the [Seamless Interaction](https://ai.meta.com/research/seamless-interaction/) dataset, providing synchronized dual-speaker videos with natural turn-taking dynamics between speaking and listening.
- **Unpaired multi-scenario data** aggregated from CelebV, TalkingHead-1KH, TEDTalk, VFHQ, and other in-the-wild videos, covering diverse facial behaviors across identities and environments (news broadcasts, interviews, casual talking, etc.).

| Category | Source | Hours | Audio | Motion |
|----------|--------|-------|-------|--------|
| **Paired Conversational** | Seamless Interaction Dataset | 657.5 h | ✅ | ✅ |
| **Unpaired Multi-Scenario** | Diverse identities and environments from in-the-wild videos | 546.5 h | ❌ | ✅ |
| **Total** | | **1,204 h** | | |

The paired conversational data is split into **622.5 hours** for training, **4.8 hours** for validation, and **30.2 hours** for testing. All data includes FLAME expression parameters, jaw and head pose, and eye-gaze annotations at 25 fps.
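Since the dataset is tagged `webdataset`, samples presumably ship as tar shards in which files sharing a basename key form one sample. The sketch below, using only the standard library, builds a tiny synthetic shard and iterates it with that grouping convention. The shard name, field names (`flame.json`), and metadata keys (`fps`, `n_frames`) are hypothetical illustrations, not the dataset's actual schema; in practice you would read the real shards with the `webdataset` library.

```python
# Minimal sketch of the WebDataset convention: one tar shard, samples grouped
# by basename key. Field names below ("flame.json") are hypothetical.
import io
import json
import tarfile
from collections import defaultdict


def write_demo_shard(path):
    """Create a tiny synthetic shard mimicking the assumed layout."""
    with tarfile.open(path, "w") as tar:
        for key in ("000000", "000001"):
            # Hypothetical per-sample metadata: 50 frames at 25 fps.
            payload = json.dumps({"fps": 25, "n_frames": 50}).encode()
            info = tarfile.TarInfo(f"{key}.flame.json")
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))


def iter_samples(path):
    """Yield (key, {field: bytes}) by grouping members on their basename key."""
    samples = defaultdict(dict)
    with tarfile.open(path) as tar:
        for member in tar:
            key, _, field = member.name.partition(".")
            samples[key][field] = tar.extractfile(member).read()
    for key in sorted(samples):
        yield key, samples[key]


if __name__ == "__main__":
    write_demo_shard("demo_shard.tar")
    for key, fields in iter_samples("demo_shard.tar"):
        meta = json.loads(fields["flame.json"])
        # Clip duration in seconds at the dataset's 25 fps annotation rate.
        print(key, meta["n_frames"] / meta["fps"], "s")
```

At 25 fps, a 50-frame sample corresponds to 2 seconds of motion, which is how per-clip durations roll up into the hour counts in the table above.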