---
license: mit
language:
  - en
tags:
  - computer-vision
  - robotics
  - robot-learning
---

# Dataset Card for 2HANDS

2HANDS is the 2-Handed Affordance + Narration DataSet: a large collection of unimanual and bimanual object affordance segmentation masks, paired with task narrations that serve as affordance class labels.

Egocentric images and narrations/verb classes are derived from the EPIC-KITCHENS dataset and EPIC-VISOR annotations [1, 2].

[1] Damen, D. et al. (2018). Scaling Egocentric Vision: The EPIC-KITCHENS Dataset. ECCV 2018.
[2] Darkhalil, A. et al. (2022). EPIC-KITCHENS VISOR Benchmark: Video Segmentations and Object Relations. NeurIPS 2022.
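As a rough illustration of how such data might be consumed, the sketch below models a single affordance record pairing an egocentric frame with per-hand segmentation masks and a narration label. The field names (`image`, `mask_left`, `mask_right`, `narration`, `verb_class`) are illustrative assumptions for this sketch, not the dataset's published schema.

```python
# Hypothetical sketch of a 2HANDS-style record; field names are
# illustrative assumptions, not the dataset's actual schema.
import numpy as np

record = {
    "image": np.zeros((256, 256, 3), dtype=np.uint8),  # egocentric RGB frame
    "mask_left": np.zeros((256, 256), dtype=bool),     # left-hand affordance mask
    "mask_right": np.zeros((256, 256), dtype=bool),    # right-hand affordance mask
    "narration": "open the drawer",                    # task narration
    "verb_class": "open",                              # affordance class label
}

def is_bimanual(rec):
    """A record is bimanual if both hand masks contain annotated pixels."""
    return bool(rec["mask_left"].any() and rec["mask_right"].any())

# Mark a few pixels in each mask to simulate a bimanual annotation.
record["mask_left"][100:110, 100:110] = True
record["mask_right"][100:110, 140:150] = True
print(is_bimanual(record))
```

The unimanual/bimanual split mentioned above can then be computed by filtering records with this predicate.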

## Citation

You may cite our work as:
Heidinger, M.\*, Jauhri, S.\*, Prasad, V., & Chalvatzaki, G. (2025). 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos. ICCV 2025.

BibTeX:

```bibtex
@misc{heidinger20252handedafforderlearningpreciseactionable,
  title={2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos},
  author={Marvin Heidinger and Snehal Jauhri and Vignesh Prasad and Georgia Chalvatzaki},
  year={2025},
  eprint={2503.09320},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}
```