---
title: README
emoji: 🌲
colorFrom: red
colorTo: blue
sdk: static
pinned: false
---
# Human–Scene Interaction Workshop
Data, benchmarks, and demos for the Workshop on Human–Scene Interaction.
## Challenge: Scene-Aware Referential Gesture Generation at ECCV26
Given speech, a 3D target coordinate, and a scene, generate SMPL-X pointing gestures that refer to the target. Submissions are scored on three axes: temporal alignment, spatial accuracy, and referent recall.
📄 **[Challenge Paper](./hsi_challenge_release_v1.pdf)** · 📦 **[Data & Baseline](https://huggingface.co/datasets/hsi-workshop/referential-gesture-challenge)** · 🎯 **[Interactive Demo](https://huggingface.co/spaces/mm-conv-scene-gesture/referential-gesture-challenge-demo)**
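The exact evaluation metrics are defined in the challenge paper. As a rough illustration of the spatial-accuracy axis only, one common way to score pointing is the angular error between the gesture's pointing ray and the direction from the hand to the target. The sketch below is a hypothetical example, not the official scoring code; the joint names and frame conventions are assumptions.

```python
import numpy as np

def pointing_angular_error(wrist, index_tip, target):
    """Angular error in degrees between the pointing ray (wrist -> index tip)
    and the ideal direction (wrist -> 3D target), all in one world frame.

    NOTE: illustrative only -- the official challenge metric may differ.
    """
    ray = index_tip - wrist        # direction the gesture points in
    to_target = target - wrist     # direction toward the referent
    cos = np.dot(ray, to_target) / (np.linalg.norm(ray) * np.linalg.norm(to_target))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# A gesture aimed straight at the target has zero error:
wrist = np.array([0.0, 1.4, 0.0])
index_tip = np.array([0.3, 1.4, 0.0])
target = np.array([2.0, 1.4, 0.0])
print(pointing_angular_error(wrist, index_tip, target))  # → 0.0
```

In practice such a per-frame error would be aggregated over the frames where the pointing stroke occurs, which is where the temporal-alignment axis comes in.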
## Resources
| Resource | Description |
|----------|-------------|
| [MM-Conv](https://huggingface.co/datasets/mm-conv-scene-gesture/demo-data) | ~2K pointing clips from naturalistic VR dialogue with 3D scene graphs |
| [SGS-HSI](https://huggingface.co/datasets/mm-conv-scene-gesture/demo-data) | 1,138 synthetic single-target pointing clips |
| OmniControl-PT baseline | Reference baseline (code & weights coming soon) |
## Timeline

| Milestone | Date |
|-----------|------|
| Challenge opens | May 5, 2026 |
| Submission deadline | July 7, 2026 |
| Results announced | July 31, 2026 |
| Workshop | October 2026 |
## Organizers

Jonas Beskow (KTH) · Rishabh Dabral (MPI) · Anna Deichler (KTH) · Fethiye Irmak Doğan (Cambridge) · Anindita Ghosh (MPI) ·