---
license: apache-2.0
tags:
- egocentric
- exocentric
- surgery
- or
- scene-graph
- activity-understanding
- gaze
- hand
---
# EgoExOR-HQ: An Ego-Exo-Centric Operating Room Dataset for Surgical Activity Understanding

[![Dataset](https://img.shields.io/badge/Data-4d5eff?style=for-the-badge&logo=huggingface&logoColor=ffc83d)](https://huggingface.co/datasets/TUM/EgoExOR)
[![Code](https://img.shields.io/badge/Code-000000?style=for-the-badge&logo=github&logoColor=white)](https://github.com/ardamamur/EgoExOR)
[![NeurIPS 2025](https://img.shields.io/badge/NeurIPS-2025-ff6b35?style=for-the-badge)](https://neurips.cc/)

**EgoExOR-HQ** — This repository hosts the **enriched high-quality release** of the EgoExOR dataset. For scene graph generation code, benchmarks, and pretrained models, see the [main EgoExOR repository](https://github.com/ardamamur/EgoExOR).

**Authors:** Ege Özsoy, Arda Mamur, Felix Tristram, Chantal Pellegrini, Magdalena Wysocki, Benjamin Busam, Nassir Navab

## ✨ What's New in EgoExOR-HQ

This release adds:
- **High-quality images** — 1344×1344 resolution (instead of 336×336)
- **Raw depth images** — from the external RGB-D cameras (instead of pre-merged point clouds), so you can build merged or per-camera point clouds for your use case
- **Per-device audio** — separate audio streams per microphone
## Overview

Operating rooms (ORs) demand precise coordination among surgeons, nurses, and equipment in a fast-paced, occlusion-heavy environment, necessitating advanced perception models to enhance safety and efficiency. Existing datasets provide either partial egocentric views or sparse exocentric multi-view context, but none combines the two comprehensively.

We introduce **EgoExOR**, the first OR dataset and accompanying benchmark to fuse first-person and third-person perspectives. Spanning 94 minutes (84,553 frames at 15 FPS) of two emulated spine procedures—*Ultrasound-Guided Needle Insertion* and *Minimally Invasive Spine Surgery*—EgoExOR integrates:

- **Egocentric:** RGB, gaze, hand tracking, audio from wearable glasses
- **Exocentric:** RGB and depth from RGB-D cameras, ultrasound imagery
- **Annotations:** 36 entities, 22 relations (568,235 triplets) for scene graph generation

This dataset sets a new foundation for OR perception, offering a rich, multimodal resource for next-generation clinical perception models.
## 🌟 Key Features

- **Multiple modalities** — RGB video, audio (full waveform + per-frame snippets, per-device), eye gaze, hand tracking, raw depth, and scene graph annotations
- **Time-synchronized streams** — All modalities aligned on a common timeline for precise cross-modal correlation
- **High-resolution RGB** — 1344×1344 frames for fine-grained visual analysis
- **Raw depth** — Build custom point clouds or depth-based models; depth comes from the external RGB-D cameras only
- **Per-device audio** — Separate microphone streams for spatial or multi-channel audio processing
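Because merged point clouds are not shipped, the raw depth images can be back-projected into per-camera point clouds with the standard pinhole model. A minimal sketch follows; the intrinsics below are hypothetical placeholders rather than the dataset's calibration, and `depth_to_point_cloud` is an illustrative helper, not part of the EgoExOR tooling:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=1.0):
    """Back-project a depth image (H, W) into an (N, 3) point cloud
    in the camera frame using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float32) * depth_scale
    valid = z > 0                                    # zero-filled pixels carry no depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Synthetic depth frame (a flat plane 1.5 m away, stored in millimeters);
# with real data, read a slice of point_cloud/depth/values instead.
depth = np.full((480, 640), 1500, dtype=np.uint16)
pts = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                           depth_scale=0.001)        # mm -> m
print(pts.shape)  # (307200, 3)
```

Registering the per-camera clouds into one merged cloud additionally requires the cameras' extrinsic poses.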
## 📂 Dataset Structure

The dataset is distributed as **phase-level HDF5 files** for efficient download:

| File | Description |
|------|-------------|
| `miss_1.h5` | MISS procedure, phase 1 |
| `miss_2.h5` | MISS procedure, phase 2 |
| `miss_3.h5` | MISS procedure, phase 3 |
| `miss_4.h5` | MISS procedure, phase 4 |

To obtain a single merged file (including splits), use the merge utility from the [main EgoExOR repository](https://github.com/ardamamur/EgoExOR) (see `data/README.md`).

### HDF5 Schema

```
/metadata
  /vocabulary/entity    — Entity names and IDs (instruments, anatomy, etc.)
  /vocabulary/relation  — Relation names and IDs (holding, cutting, etc.)
  /sources/sources      — Camera/source names and IDs (head_surgeon, external_1, etc.)
  /dataset              — version, creation_date, title

/procedures/{procedure}/phases/{phase}/takes/{take}/
  /sources                  — source_count, source_0, source_1, … (camera roles)
  /frames/rgb               — (num_frames, num_cameras, H, W, 3) uint8 — 1344×1344
  /eye_gaze/coordinates     — (num_frames, num_ego_cameras, 3) float32 — gaze 2D + camera ID
  /eye_gaze_depth/values    — (num_frames, num_ego_cameras) float32
  /hand_tracking/positions  — (num_frames, num_ego_cameras, 17) float32
  /audio/waveform           — Full stereo waveform
  /audio/snippets           — 1-second snippets aligned to frames
  /audio/per_device/        — Per-microphone waveform and snippets
  /point_cloud/depth/values — Raw depth images (external cameras; others zero-filled)
  /point_cloud/merged/      — Not populated; use raw depth to build point clouds yourself
  /annotations/             — Scene graph annotations (frame_idx, rel_annotations, scene_graph)

/splits
  train, validation, test — Split tables (procedure, phase, take, frame_id)
```

**Note:** Camera/source IDs in `eye_gaze/coordinates` map to `metadata/sources` for correct source names.
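Reading follows the usual `h5py` pattern, and slicing a dataset pulls only the requested bytes from disk. The snippet below builds a tiny stand-in file with the same group layout so it runs anywhere; with the real data you would open one of the `miss_*.h5` files instead (the group path and array shapes here are toy values for illustration, not the real ones):

```python
import h5py
import numpy as np

# Tiny mock file mimicking the schema above (toy shapes).
take = "procedures/miss/phases/1/takes/1"
with h5py.File("egoexor_demo.h5", "w") as f:
    f.create_dataset(f"{take}/frames/rgb",
                     data=np.zeros((4, 2, 8, 8, 3), dtype=np.uint8))
    f.create_dataset(f"{take}/eye_gaze/coordinates",
                     data=np.zeros((4, 1, 3), dtype=np.float32))

with h5py.File("egoexor_demo.h5", "r") as f:
    rgb = f[f"{take}/frames/rgb"]
    frame = rgb[0, 0]          # partial read: frame 0, camera 0 only
    gaze = f[f"{take}/eye_gaze/coordinates"][0, 0]
    cam_id = int(gaze[2])      # resolve via metadata/sources, per the note above

print(frame.shape, cam_id)
```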

## ⚙️ Efficiency and Usability

- **HDF5** — Hierarchical structure, partial loading, gzip compression
- **Chunking** — Efficient access to frame ranges for sequence-based training
- **Logical layout** — `procedures → phases → takes → modality` for easy navigation
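Chunking along the frame axis means a contiguous sequence window maps to a few chunk reads rather than a full-file scan. A toy sketch of that access pattern (file name, shapes, and chunk size are made up for illustration):

```python
import h5py
import numpy as np

# Toy file chunked along the frame axis, mirroring the layout idea.
data = (np.arange(16 * 2 * 4 * 4 * 3) % 256).astype(np.uint8)
with h5py.File("chunk_demo.h5", "w") as f:
    f.create_dataset("frames/rgb",
                     data=data.reshape(16, 2, 4, 4, 3),
                     chunks=(4, 2, 4, 4, 3),   # 4-frame chunks
                     compression="gzip")

with h5py.File("chunk_demo.h5", "r") as f:
    # One 4-frame training window decompresses exactly one chunk.
    window = f["frames/rgb"][4:8]

print(window.shape)
```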

## 📜 License

Released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). Free for academic and commercial use with attribution.

## 🔗 Related Resources

- **Original EgoExOR (v1)** — [ardamamur/EgoExOR](https://huggingface.co/datasets/ardamamur/EgoExOR) — 336×336 images, pre-merged point clouds, merged audio
- **Code, benchmarks, pretrained model** — [github.com/ardamamur/EgoExOR](https://github.com/ardamamur/EgoExOR)

---
**Dataset:** [TUM/EgoExOR](https://huggingface.co/datasets/TUM/EgoExOR) · **Last Updated:** February 2025