---
license: cc-by-nc-4.0
---

# GestureHYDRA: Semantic Co-speech Gesture Synthesis via Hybrid Modality Diffusion Transformer and Cascaded-Synchronized Retrieval-Augmented Generation (ICCV 2025)

## Dataset description

  • The proposed Streamer dataset consists of 281 anchors and 20,969 clips.
  • The training set contains 19,051 clips from 269 anchors.
  • The seen test set contains 920 clips whose anchor IDs appear in the training set.
  • The unseen test set contains 998 clips whose anchor IDs never appear in the training set.

## Dataset structure

The dataset contains three folders: train, test_seen, and test_unseen. Take train as an example:

```
train/
├── audios/
│   └── {anchor_id}/
│       └── {video_name_md5}/
│           └── {start_time}_{end_time}.wav
└── gestures/
    └── {anchor_id}/
        └── {video_name_md5}/
            └── {start_time}_{end_time}.pkl
```

The audio data in the audios folder corresponds one-to-one with the human motion data in the gestures folder.
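Given this one-to-one layout, matched audio/gesture pairs can be enumerated by mirroring each `.wav` path under `gestures/`. The sketch below is a minimal example, assuming the split has been extracted to a local `train/` directory; `paired_clips` is a hypothetical helper, not part of the dataset release.

```python
import os

def paired_clips(root):
    """Yield (wav_path, pkl_path) pairs by mirroring every .wav path
    under audios/ into gestures/ and swapping the extension."""
    audio_dir = os.path.join(root, "audios")
    pairs = []
    for dirpath, _dirnames, filenames in os.walk(audio_dir):
        for name in filenames:
            if not name.endswith(".wav"):
                continue
            wav_path = os.path.join(dirpath, name)
            rel = os.path.relpath(wav_path, audio_dir)
            pkl_path = os.path.join(root, "gestures", rel[:-4] + ".pkl")
            if os.path.exists(pkl_path):
                pairs.append((wav_path, pkl_path))
    return pairs

# Hypothetical usage, assuming the train split is extracted locally:
# pairs = paired_clips("train")
```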

Data contained in the pkl file (the pose fields follow the SMPL-X parameter layout, stored in axis-angle format; bs denotes the number of frames in the clip):

  • width, height: the video width and height in pixels
  • center: the center point of the video frame
  • batch_size: the sequence length bs (number of frames)
  • camera_transl: the camera translation
  • focal_length: the camera focal length in pixels
  • body_pose_axis: body pose, (bs, 21x3)
  • jaw_pose: (bs, 3)
  • betas: body shape coefficients, (1, 10)
  • global_orient: global root orientation, (bs, 3)
  • transl: global translation, (bs, 3)
  • left_hand_pose: (bs, 15x3)
  • right_hand_pose: (bs, 15x3)
  • leye_pose: (bs, 3)
  • reye_pose: (bs, 3)
  • pose_embedding: latent pose embedding, (bs, 32)
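As a quick sanity check after downloading, one can load a clip's pkl and verify the per-frame shapes listed above. The snippet below is a sketch; the example path is hypothetical and should be replaced with any real clip under gestures/.

```python
import pickle

import numpy as np

def inspect_motion(path):
    """Load one gesture .pkl and verify the documented array shapes,
    where bs is the number of frames in the clip."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    bs = data["batch_size"]
    expected = {
        "body_pose_axis": (bs, 21 * 3),
        "jaw_pose": (bs, 3),
        "global_orient": (bs, 3),
        "transl": (bs, 3),
        "left_hand_pose": (bs, 15 * 3),
        "right_hand_pose": (bs, 15 * 3),
        "leye_pose": (bs, 3),
        "reye_pose": (bs, 3),
        "pose_embedding": (bs, 32),
        "betas": (1, 10),
    }
    for key, shape in expected.items():
        assert np.asarray(data[key]).shape == shape, key
    return bs

# Hypothetical usage (replace with a real clip path):
# n_frames = inspect_motion("train/gestures/{anchor_id}/{video_name_md5}/{start_time}_{end_time}.pkl")
```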