---
license: cc-by-nc-sa-4.0
language:
- en
pretty_name: "InterAct Dataset: Two-Person Multimodal"
tags:
- motion-capture
- motion-generation
- motion-models
- social-robotics
- computer-vision
size_categories:
- 1K<n<10K
---
# InterAct Dataset
InterAct is a multi-modal two-person interaction dataset for research in human motion, facial expressions, and speech. For details, please refer to [our webpage](https://hku-cg.github.io/interact/).
## Quick Start
A Quick Start Jupyter notebook is provided at `quickstart.ipynb`. It covers examples for:
1. Querying the scenario and actor databases
2. Finding actor pairs for a recording session
3. Loading performance data (BVH, face parameters, audio)
4. Visualizing face blendshapes over time
5. Loading both actors in a two-person interaction
## Repository Structure
### Database Files
#### `scenarios.db`
SQLite database containing scenario metadata with the following tables:
- **scenarios**: Contains scenario definitions
- `id` (INTEGER): Scenario ID (used in filenames)
- `relationship_id` (INTEGER): FK to relationships table
- `primary_emotion_id` (INTEGER): FK to emotions table
- `character_setup` (TEXT): Character context description
- `scenario` (TEXT): Scenario description
- **relationships**: Relationship types between actors (e.g., "architect / contractor", "boss / subordinate")
- `id` (INTEGER): Relationship ID
- `name` (VARCHAR): Relationship description
- **emotions**: Primary emotion categories (e.g., "admiration", "anger", "amusement")
- `id` (INTEGER): Emotion ID
- `name` (VARCHAR): Emotion name
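For example, the tables above can be joined to recover a scenario's relationship and primary emotion. The sketch below builds an in-memory database with one illustrative row per table (the sample values are hypothetical); in practice you would connect to the downloaded `scenarios.db` instead:

```python
import sqlite3

# In-memory stand-in mirroring the scenarios.db schema described above;
# replace with sqlite3.connect("scenarios.db") for the real file.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE relationships (id INTEGER PRIMARY KEY, name VARCHAR);
CREATE TABLE emotions (id INTEGER PRIMARY KEY, name VARCHAR);
CREATE TABLE scenarios (
    id INTEGER PRIMARY KEY,
    relationship_id INTEGER,
    primary_emotion_id INTEGER,
    character_setup TEXT,
    scenario TEXT
);
INSERT INTO relationships VALUES (1, 'architect / contractor');
INSERT INTO emotions VALUES (1, 'admiration');
INSERT INTO scenarios VALUES (51, 1, 1, 'sample setup', 'sample scenario');
""")

# Join a scenario with its relationship and primary emotion.
row = con.execute("""
    SELECT s.id, r.name, e.name
    FROM scenarios s
    JOIN relationships r ON s.relationship_id = r.id
    JOIN emotions e ON s.primary_emotion_id = e.id
    WHERE s.id = ?
""", (51,)).fetchone()
print(row)  # (51, 'architect / contractor', 'admiration')
```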
#### `actors.db`
SQLite database containing actor and session information:
- **actors**: Actor metadata
- `actor_id` (TEXT): Three-digit actor ID (e.g., "001", "002")
- `gender` (TEXT): "male" or "female"
- **sessions**: Recording session information
- `date` (TEXT): Session date in YYYYMMDD format
- `male_id` (TEXT): Actor ID of the male participant
- `female_id` (TEXT): Actor ID of the female participant
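To find the actor pair for a recording session, look up the session by date. The sketch below again uses an in-memory mirror of the schema with hypothetical sample rows; in practice, connect to the downloaded `actors.db`:

```python
import sqlite3

# In-memory stand-in for actors.db; replace with
# sqlite3.connect("actors.db") for the real file.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE actors (actor_id TEXT, gender TEXT);
CREATE TABLE sessions (date TEXT, male_id TEXT, female_id TEXT);
INSERT INTO actors VALUES ('001', 'male'), ('002', 'female');
INSERT INTO sessions VALUES ('20231119', '001', '002');
""")

# Which two actors performed together on this date?
male_id, female_id = con.execute(
    "SELECT male_id, female_id FROM sessions WHERE date = ?", ("20231119",)
).fetchone()
print(male_id, female_id)  # 001 002
```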
---
### Data Directories
Motion and facial data are provided here at **30 fps**. The performance data files follow this naming convention:
```
<date>_<actor_id>_<scenario_id>.<extension>
```
Example: `20231119_001_051.bvh` = recorded on 2023-11-19, actor 001, scenario 51
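A small helper can split such a filename back into its parts (the function name here is ours, not part of the dataset tooling):

```python
from pathlib import Path

def parse_performance_filename(path):
    """Split '<date>_<actor_id>_<scenario_id>.<extension>' into its parts."""
    date, actor_id, scenario_id = Path(path).stem.split("_")
    return date, actor_id, scenario_id

print(parse_performance_filename("20231119_001_051.bvh"))
# ('20231119', '001', '051')
```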
#### `bvhs/`
BVH motion capture files of the performances.
#### `bvhs_retarget/`
Retargeted BVH files for use in `body_to_render.blend`.
#### `face_ict/`
Facial blendshape parameters in ICT-FaceKit format (shape: `(N, 55)`). Suitable for training models and rendering with `face_ict_to_render.blend`.
#### `face_arkit/`
Facial blendshape parameters in ARKit format (shape: `(N, 51)`). Used in `body_to_render.blend` for full body visualization.
#### `face_ict_templates/`
Base mesh templates in ICT-FaceKit topology, named by actor ID (e.g., `001.obj`). Useful for training models.
#### `wav/`
Audio recordings from each actor in each performance.
#### `body_renders/`
Pre-rendered full-body visualizations (body + face + audio) as MP4 videos. These files use a different naming convention since they contain both actors:
```
<date>_<scenario_id>.mp4
```
Example: `20231119_051.mp4` = scenario 51 recorded on 2023-11-19
#### `lip_acc/`
Additional 1-hour facial dataset recorded with particular attention to accurate lip shapes and pronunciation. Only one actor (006) was captured in this dataset, and the `scenario_id` in these filenames corresponds to the sentence order in `lip_acc_sentences.txt`. Useful for fine-tuning.
---
### Scripts (`scripts/`)
#### Blender Files
- **`body_to_render.blend`**: Blender project for rendering full-body (face + body) visualizations. It contains pre-configured character rigs mapped to actor IDs. The "composite scene in dataset" script reads job files and composites both actors, combining BVH body motion from `bvhs_retarget/` with ARKit face blendshapes from `face_arkit/`. The "render all scenes" script renders MKV videos to `body_renders_noaudio/`.
- **`face_ict_to_render.blend`**: Blender project for rendering face-only visualizations using ICT-FaceKit topology. Contains pre-configured actor mesh scenes (`mesh-001`, `mesh-002`, etc.) and a "composite scenes and render" script that reads job files, loads blendshape animations from `face_ict/`, and renders 1080x1080 PNG sequences at 30fps using EEVEE. Output goes to `face_renders_noaudio/`.
#### Conversion Scripts
- **`face_ict_to_arkit.py`**: Converts ICT-FaceKit blendshape parameters (55 blendshapes) to ARKit format (51 blendshapes). Merges certain blendshape pairs and removes unused indices.
- **`face_ict_to_vertices.py`**: Converts ICT blendshape parameters to vertex sequences using the blendshape basis matrix. Outputs per-frame vertex positions as numpy arrays with shape `(N, V*3)`, where coordinates are packed contiguously per vertex: `[v1x, v1y, v1z, v2x, v2y, v2z, ...]`.
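The conversion described above amounts to a matrix product per frame. The sketch below uses random stand-in data and a tiny vertex count for illustration; the actual script and basis matrix may handle the neutral mesh differently:

```python
import numpy as np

# Minimal sketch of blendshape-to-vertex conversion; V is kept tiny here,
# while the real ICT-FaceKit mesh has far more vertices.
V = 4
rng = np.random.default_rng(0)
neutral = rng.standard_normal(V * 3)      # rest pose, packed [v1x, v1y, v1z, v2x, ...]
basis = rng.standard_normal((55, V * 3))  # one packed offset row per ICT blendshape
params = rng.random((8, 55))              # (N, 55) per-frame blendshape weights

# Per-frame vertices = neutral mesh + weighted sum of blendshape offsets.
vertices = neutral + params @ basis       # shape (N, V*3)
print(vertices.shape)  # (8, 12)
```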
#### Render Utilities
- **`render_add_audio.py`**: Combines rendered video with audio tracks. Supports both face renders (single actor) and body renders (mixed audio from both actors).
#### Data Files
- **`blendshape_ict.npy`**: ICT-FaceKit blendshape basis matrix used for converting blendshape parameters to vertex offsets, used in `face_ict_to_vertices.py`.
#### Job Files
We recommend using a job file and splitting the rendering into batches, as opposed to rendering all scenarios in one go.
- **`example_body_render_job.txt`**: Example job file listing scenes to render in body format (`<date>_<scenario_id>`).
- **`example_face_render_job.txt`**: Example job file listing scenes to render in face format (`<date>_<actor_id>_<scenario_id>`).
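Following the naming conventions above, a body-render job file is simply one scene per line. The entries below are illustrative (taken from scenes mentioned elsewhere in this card):

```
20231119_051
20240126_034
```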
## Errata
- The face files for `20240126_006_034` are unavailable due to a conversion issue. When rendering this scene in `body_to_render.blend`, the female face blendshape animations are not applied.
## Acknowledgements
`body_to_render.blend` is based on the visualization Blender project kindly provided by the [BEAT dataset](https://pantomatrix.github.io/BEAT/) authors.
If you use InterAct in your research, please cite as follows:
```bibtex
@article{ho2025interact,
  title={InterAct: A Large-Scale Dataset of Dynamic, Expressive and Interactive Activities between Two People in Daily Scenarios},
  author={Ho, Leo and Huang, Yinghao and Qin, Dafei and Shi, Mingyi and Tse, Wangpok and Liu, Wei and Yamagishi, Junichi and Komura, Taku},
  journal={Proceedings of the ACM on Computer Graphics and Interactive Techniques},
  volume={8},
  number={4},
  pages={1--27},
  year={2025},
  publisher={ACM New York, NY},
  doi={10.1145/3747871}
}
```