---
license: cc-by-4.0
task_categories:
  - visual-question-answering
  - summarization
  - video-classification
  - any-to-any
language:
  - en
  - de
pretty_name: IndEgo
tags:
  - industrial
  - egocentric
  - procedural
  - collaborative work
  - mistake detection
  - VQA
  - video understanding
size_categories:
  - 10K<n<100K
---

# IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants

Vivek Chavan¹²*, Yasmina Imgrund²†, Tung Dao²†, Sanwantri Bai³†, Bosong Wang⁴†, Ze Lu⁵†, Oliver Heimann¹, Jörg Krüger¹²

¹Fraunhofer IPK, Berlin    ²Technical University of Berlin    ³University of Tübingen
⁴RWTH Aachen University    ⁵Leibniz University Hannover

*Project Lead     †Work done during student theses/projects at Fraunhofer IPK, Berlin.

Published at NeurIPS 2025

Project Website · Paper PDF · Code · NeurIPS Page · Open in Colab


## 🚧 Update in Progress 🚧

⚠️ Based on community feedback, the dataset structure is being reorganised; file paths and folder names are changing.

If you download the data now, your local file structure may become inconsistent with future updates. We recommend waiting until the restructuring is complete (ETA: 12 Dec 2025).

👉 Click here to be notified when the dataset is ready

## 📖 Abstract

We introduce IndEgo, a multimodal egocentric and exocentric video dataset capturing common industrial tasks such as assembly/disassembly, logistics and organisation, inspection and repair, and woodworking. The dataset includes 3,460 egocentric recordings (~197 hours) and 1,092 exocentric recordings (~97 hours).

*Figure: Dataset overview.*

A central focus of IndEgo is collaborative work, where two workers coordinate on cognitively and physically demanding tasks. The egocentric recordings include rich multimodal data — eye gaze, narration, sound, motion, and semi-dense point clouds.

We provide:

  • Detailed annotations: actions, summaries, mistake labels, and narrations
  • Processed outputs: eye gaze, hand poses, SLAM-based semi-dense point clouds
  • Benchmarks: procedural/non-procedural task understanding, collaborative tasks, Mistake Detection, and reasoning-based Video QA

Baseline evaluations show that IndEgo presents a challenge for state-of-the-art multimodal models.
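Once the restructuring is complete, individual modalities can be fetched selectively from the Hub rather than downloading all ~294 hours at once. Below is a minimal sketch using `huggingface_hub.snapshot_download` with glob filters; the repo id (`vivek9chavan/IndEgo`) and the folder names (`egocentric/`, `exocentric/`) are assumptions for illustration, not the confirmed layout — check the dataset page for the final structure.

```python
def patterns_for(modality: str) -> list[str]:
    """Build glob patterns for one modality.

    The folder names here are hypothetical placeholders; the actual
    directory layout is being reorganised at the time of writing.
    """
    folders = {"ego": "egocentric", "exo": "exocentric"}
    return [f"{folders[modality]}/**"]


if __name__ == "__main__":
    # Requires: pip install huggingface_hub
    from huggingface_hub import snapshot_download

    # Download only the egocentric recordings into ./IndEgo
    snapshot_download(
        repo_id="vivek9chavan/IndEgo",  # assumed hub id
        repo_type="dataset",
        local_dir="IndEgo",
        allow_patterns=patterns_for("ego"),
    )
```

`allow_patterns` restricts the snapshot to matching files, which keeps the initial download manageable when you only need one view of the data.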


## 🧩 Citation

If you use IndEgo in your research, please cite our NeurIPS 2025 paper:

@inproceedings{Chavan2025IndEgo,
  author    = {Vivek Chavan and Yasmina Imgrund and Tung Dao and Sanwantri Bai and Bosong Wang and Ze Lu and Oliver Heimann and J{\"o}rg Kr{\"u}ger},
  title     = {IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year      = {2025},
  url       = {https://neurips.cc/virtual/2025/poster/121501}
}

## Acknowledgments & Funding

This work is supported by the German Federal Ministry of Research, Technology and Space (BMFTR) and the German Aerospace Center (DLR) under the KIKERP project (Grant No. 16IS23055C) within the KI4KMU program. We are grateful to the Meta AI and Reality Labs teams for the Project Aria initiative, including the research kit, associated tools, and services. We also thank Hugging Face for providing a public-dataset storage grant that enables large-scale hosting and community access to the IndEgo dataset. Data collection was conducted at the research labs and test field of the Institute of Machine Tools and Factory Management (IWF), TU Berlin. Finally, we extend our sincere thanks to all student volunteers and workers who contributed to the data collection.