---
license: cc-by-4.0
task_categories:
  - visual-question-answering
  - summarization
  - video-classification
  - any-to-any
language:
  - en
  - de
pretty_name: IndEgo
tags:
  - industrial
  - egocentric
  - procedural
  - collaborative work
  - mistake detection
  - VQA
  - video understanding
size_categories:
  - 10K<n<100K
---

# IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants

**Project Page:** https://vivek9chavan.github.io/IndEgo/

Open in Colab

**Abstract:**

We introduce IndEgo, a multimodal egocentric and exocentric video dataset capturing common industrial tasks such as assembly/disassembly, logistics and organisation, inspection and repair, and woodworking.
The dataset includes 3,460 egocentric recordings (~197 hours) and 1,092 exocentric recordings (~97 hours).


A central focus of IndEgo is collaborative work, where two workers coordinate on cognitively and physically demanding tasks.
The egocentric recordings include rich multimodal data — eye gaze, narration, sound, motion, and semi-dense point clouds.

We provide:

- **Detailed annotations:** actions, summaries, mistake labels, and narrations
- **Processed outputs:** eye gaze, hand poses, SLAM-based semi-dense point clouds
- **Benchmarks:** procedural/non-procedural task understanding, Mistake Detection, and reasoning-based Video QA
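As an illustration of how the mistake-detection annotations might be consumed downstream, here is a minimal sketch. The field names (`video`, `action`, `start`, `end`, `mistake`) are hypothetical and do not reflect IndEgo's actual annotation schema; consult the dataset files for the real format.

```python
# Hypothetical annotation records: each action segment has a time span
# (seconds) and an optional mistake label. Schema is illustrative only.
from collections import Counter

annotations = [
    {"video": "ego_0001", "action": "pick_screwdriver", "start": 3.2, "end": 6.8, "mistake": None},
    {"video": "ego_0001", "action": "attach_panel", "start": 7.0, "end": 21.5, "mistake": "wrong_orientation"},
    {"video": "ego_0002", "action": "inspect_joint", "start": 1.0, "end": 9.4, "mistake": None},
]

def mistake_counts(records):
    """Count mistake-labelled action segments per video."""
    counts = Counter()
    for r in records:
        if r["mistake"] is not None:
            counts[r["video"]] += 1
    return dict(counts)

print(mistake_counts(annotations))  # -> {'ego_0001': 1}
```

A script along these lines could, for example, split recordings into mistake-free and mistake-containing subsets before training a detection baseline.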

Baseline evaluations show that IndEgo presents a challenge for state-of-the-art multimodal models.

**Acknowledgements:** We thank Meta Reality Labs for their support and for the open-science initiative behind Project Aria.

If you use IndEgo, please cite our NeurIPS 2025 paper:

```bibtex
@inproceedings{Chavan2025IndEgo,
  author    = {Vivek Chavan and Yasmina Imgrund and Tung Dao and Sanwantri Bai and Bosong Wang and Ze Lu and Oliver Heimann and J{\"o}rg Kr{\"u}ger},
  title     = {IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year      = {2025},
  url       = {https://neurips.cc/virtual/2025/poster/121501}
}
```