InterAct Dataset
InterAct is a multi-modal two-person interaction dataset for research in human motion, facial expressions, and speech. For details, please refer to our webpage.
Quick Start
A Quick Start Jupyter notebook is provided at quickstart.ipynb. It covers examples for:
- Querying the scenario and actor databases
- Finding actor pairs for a recording session
- Loading performance data (BVH, face parameters, audio)
- Visualizing face blendshapes over time
- Loading both actors in a two-person interaction
Repository Structure
Database Files
scenarios.db
SQLite database containing scenario metadata with the following tables:
scenarios: Contains scenario definitions
- id (INTEGER): Scenario ID (used in filenames)
- relationship_id (INTEGER): FK to relationships table
- primary_emotion_id (INTEGER): FK to emotions table
- character_setup (TEXT): Character context description
- scenario (TEXT): Scenario description
relationships: Relationship types between actors (e.g., "architect / contractor", "boss / subordinate")
- id (INTEGER): Relationship ID
- name (VARCHAR): Relationship description
emotions: Primary emotion categories (e.g., "admiration", "anger", "amusement")
- id (INTEGER): Emotion ID
- name (VARCHAR): Emotion name
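A minimal sketch of querying scenarios.db with Python's built-in sqlite3 module, joining each scenario with its relationship type and primary emotion (the database path and emotion filter are illustrative):

```python
import sqlite3

# Assumed path to the scenario database at the repository root.
con = sqlite3.connect("scenarios.db")

# Join each scenario with its relationship type and primary emotion.
rows = con.execute(
    """
    SELECT s.id, r.name AS relationship, e.name AS emotion, s.scenario
    FROM scenarios AS s
    JOIN relationships AS r ON s.relationship_id = r.id
    JOIN emotions AS e ON s.primary_emotion_id = e.id
    WHERE e.name = ?
    """,
    ("anger",),
).fetchall()

for scenario_id, relationship, emotion, description in rows:
    print(scenario_id, relationship, emotion, description)

con.close()
```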
actors.db
SQLite database containing actor and session information:
actors: Actor metadata
- actor_id (TEXT): Three-digit actor ID (e.g., "001", "002")
- gender (TEXT): "male" or "female"
sessions: Recording session information
- date (TEXT): Session date in YYYYMMDD format
- male_id (TEXT): Actor ID of the male participant
- female_id (TEXT): Actor ID of the female participant
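A similar sketch for looking up the actor pair of a recording session (table and column names as listed above; the date is the example used in the next section):

```python
import sqlite3

con = sqlite3.connect("actors.db")  # assumed path at the repository root

date = "20231119"  # example session date used elsewhere in this README
session = con.execute(
    "SELECT male_id, female_id FROM sessions WHERE date = ?", (date,)
).fetchone()

if session is not None:
    male_id, female_id = session
    # Fetch both actors' metadata for this session.
    actors = con.execute(
        "SELECT actor_id, gender FROM actors WHERE actor_id IN (?, ?)",
        (male_id, female_id),
    ).fetchall()
    print(date, actors)

con.close()
```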
Data Directories
Motion and facial data are provided here at 30 fps. The performance data files follow this naming convention:
<date>_<actor_id>_<scenario_id>.<extension>
Example: 20231119_001_051.bvh = recorded on 2023-11-19, actor 001, scenario 51
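A small helper for parsing this convention might look like the following (the function name is illustrative):

```python
from pathlib import Path

def parse_performance_name(path):
    """Split <date>_<actor_id>_<scenario_id>.<extension> into its parts."""
    date, actor_id, scenario_id = Path(path).stem.split("_")
    return date, actor_id, scenario_id

print(parse_performance_name("bvhs/20231119_001_051.bvh"))
# -> ('20231119', '001', '051')
```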
bvhs/
BVH motion capture files of the performances.
bvhs_retarget/
Retargeted BVH files for use in body_to_render.blend.
face_ict/
Facial blendshape parameters in ICT-FaceKit format (shape: (N, 55)). Suitable for training models and rendering with face_ict_to_render.blend.
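A minimal loading and plotting sketch, assuming the parameters are stored as per-performance NumPy .npy arrays of shape (N, 55):

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumption: one NumPy array per performance with shape (N, 55).
params = np.load("face_ict/20231119_001_051.npy")

fps = 30
t = np.arange(params.shape[0]) / fps  # frame timestamps in seconds

# Plot a single blendshape channel over time (index 0 is illustrative;
# see ICT-FaceKit for the channel ordering).
plt.plot(t, params[:, 0])
plt.xlabel("time (s)")
plt.ylabel("blendshape weight")
plt.show()
```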
face_arkit/
Facial blendshape parameters in ARKit format (shape: (N, 51)). Used in body_to_render.blend for full body visualization.
face_ict_templates/
Base mesh templates in ICT-FaceKit topology, named by actor ID (e.g., 001.obj). Useful for training models.
wav/
Audio recordings from each actor in each performance.
body_renders/
Pre-rendered full-body visualizations (body + face + audio) as MP4 videos. These files use a different naming convention since they contain both actors:
<date>_<scenario_id>.mp4
Example: 20231119_051.mp4 = scenario 51 recorded on 2023-11-19
lip_acc/
An additional one-hour facial dataset focused on the accuracy of lip shapes and pronunciation. Only one actor (006) was captured in this dataset, and the scenario_id of these files corresponds to the order of the sentences in lip_acc_sentences.txt. Useful for fine-tuning.
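A sketch of looking up the sentence for a lip_acc file, assuming the scenario_id is a 1-based index into the lines of lip_acc_sentences.txt:

```python
# Assumption: scenario_id in a lip_acc file name is a 1-based index
# into the lines of lip_acc_sentences.txt.
with open("lip_acc_sentences.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f]

scenario_id = 12  # illustrative; taken from a file name such as <date>_006_012.wav
print(sentences[scenario_id - 1])
```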
Scripts (scripts/)
Blender Files
- body_to_render.blend: Blender project for rendering full-body (face + body) visualizations. Contains pre-configured character rigs mapped to actor IDs. The "composite scene in dataset" script reads job files and composites both actors with BVH body motion from bvhs_retarget/ and ARKit face blendshapes from face_arkit/. The "render all scenes" script renders MKV videos to body_renders_noaudio/.
- face_ict_to_render.blend: Blender project for rendering face-only visualizations using ICT-FaceKit topology. Contains pre-configured actor mesh scenes (mesh-001, mesh-002, etc.) and a "composite scenes and render" script that reads job files, loads blendshape animations from face_ict/, and renders 1080x1080 PNG sequences at 30 fps using EEVEE. Output goes to face_renders_noaudio/.
Conversion Scripts
- face_ict_to_arkit.py: Converts ICT-FaceKit blendshape parameters (55 blendshapes) to ARKit format (51 blendshapes). Merges certain blendshape pairs and removes unused indices.
- face_ict_to_vertices.py: Converts ICT blendshape parameters to vertex sequences using the blendshape basis matrix. Outputs per-frame vertex positions as numpy arrays with shape (N, V*3), where coordinates are packed contiguously per vertex: [v1x, v1y, v1z, v2x, v2y, v2z, ...].
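The vertex conversion is a linear blendshape model; conceptually it looks like the sketch below. The basis shape (55, V, 3) and the omission of the neutral template are assumptions here; see face_ict_to_vertices.py for the actual implementation.

```python
import numpy as np

params = np.load("face_ict/20231119_001_051.npy")   # (N, 55) blendshape weights (assumed .npy)
basis = np.load("scripts/blendshape_ict.npy")        # assumed shape (55, V, 3): per-blendshape vertex offsets

N, V = params.shape[0], basis.shape[1]

# Linear blendshape model: each frame is a weighted sum of the basis offsets.
# (A neutral template mesh would normally be added on top; omitted here.)
offsets = np.einsum("nb,bvc->nvc", params, basis)    # (N, V, 3)

# Pack coordinates contiguously per vertex: [v1x, v1y, v1z, v2x, v2y, v2z, ...]
vertices = offsets.reshape(N, V * 3)
```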
Render Utilities
render_add_audio.py: Combines rendered video with audio tracks. Supports both face renders (single actor) and body renders (mixed audio from both actors).
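For the body case this amounts to mixing both actors' audio tracks and muxing them into the rendered video; a conceptually equivalent ffmpeg invocation is sketched below (file names are illustrative, and render_add_audio.py may differ in details):

```python
import subprocess

# Illustrative ffmpeg call: mix both actors' audio and mux it into the
# rendered body video. File names are examples; render_add_audio.py may
# implement this differently.
subprocess.run([
    "ffmpeg",
    "-i", "body_renders_noaudio/20231119_051.mkv",
    "-i", "wav/20231119_001_051.wav",
    "-i", "wav/20231119_002_051.wav",
    "-filter_complex", "[1:a][2:a]amix=inputs=2[a]",
    "-map", "0:v", "-map", "[a]",
    "-c:v", "libx264", "-c:a", "aac", "-shortest",
    "body_renders/20231119_051.mp4",
], check=True)
```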
Data Files
blendshape_ict.npy: ICT-FaceKit blendshape basis matrix for converting blendshape parameters to vertex offsets; used in face_ict_to_vertices.py.
Job Files
We recommend using a job file and splitting the rendering into batches, as opposed to rendering all scenarios in one go.
- example_body_render_job.txt: Example job file listing scenes to render in body format (<date>_<scenario_id>).
- example_face_render_job.txt: Example job file listing scenes to render in face format (<date>_<actor_id>_<scenario_id>).
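Assuming a job file lists one scene name per line, a body-render job might look like this (scene names illustrative):

```text
20231119_051
20231119_052
```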
Errata
- The face files for 20240126_006_034 are unavailable due to a conversion issue. When rendering this scene in body_to_render.blend, the female face blendshape animations are not applied.
Acknowledgements
body_to_render.blend is based on the visualization Blender project kindly provided by the BEAT dataset authors.
If you use InterAct as part of your research, please cite it as follows:
@article{ho2025interact,
title={InterAct: A Large-Scale Dataset of Dynamic, Expressive and Interactive Activities between Two People in Daily Scenarios},
author={Ho, Leo and Huang, Yinghao and Qin, Dafei and Shi, Mingyi and Tse, Wangpok and Liu, Wei and Yamagishi, Junichi and Komura, Taku},
journal={Proceedings of the ACM on Computer Graphics and Interactive Techniques},
volume={8},
number={4},
pages={1--27},
year={2025},
publisher={ACM New York, NY},
doi={10.1145/3747871}
}