# EgoDynamic4D Dataset from the AAAI 2026 paper: [Understanding Dynamic Scenes in Egocentric 4D Point Clouds](https://arxiv.org/abs/2508.07251)

🚀 **The EgoDynamic4D dataset is coming soon.**

This repository hosts the official release of **EgoDynamic4D**, a large-scale egocentric **4D dynamic scene understanding** benchmark introduced in our AAAI 2026 paper:

> **Understanding Dynamic Scenes in Egocentric 4D Point Clouds**
---
## About the Dataset

**EgoDynamic4D** is a question-answering (QA) benchmark designed for fine-grained **spatio-temporal reasoning** in egocentric dynamic scenes.

The dataset includes:

* Egocentric RGB-D videos
* Camera poses
* Globally unique instance masks
* 4D bounding boxes over time
* Large-scale QA annotations for dynamic reasoning tasks
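To make the annotation types above concrete, here is a minimal sketch of how one QA sample could be organized. Every key name, path, and value below is an illustrative assumption, not the official schema; the released dataset will define its own format.

```python
# Hypothetical layout of a single EgoDynamic4D QA sample.
# All field names and values are assumptions for illustration only.
sample = {
    "scene_id": "adt_scene_0001",            # source scene (assumed naming)
    "rgb_video": "scenes/0001/rgb.mp4",      # egocentric RGB stream
    "depth_video": "scenes/0001/depth.mp4",  # aligned depth stream
    "camera_poses": [                        # per-frame 4x4 camera-to-world poses
        [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
    ],
    "instance_masks": "scenes/0001/masks/",  # globally unique instance IDs per pixel
    "boxes_4d": {                            # per-instance 3D boxes tracked over time
        "instance_7": {
            "frame_0": {"center": [0.0, 0.0, 1.0], "size": [0.5, 0.5, 0.5], "yaw": 0.0},
        },
    },
    "qa": [                                  # task-driven QA annotation
        {"question": "Which object moved toward the camera?", "answer": "instance_7"},
    ],
}

def has_all_modalities(s):
    """Check that a sample carries every annotation type listed above."""
    required = {"rgb_video", "depth_video", "camera_poses",
                "instance_masks", "boxes_4d", "qa"}
    return required.issubset(s)

print(has_all_modalities(sample))  # → True
```

The nesting by instance and frame in `boxes_4d` is one plausible way to represent "4D bounding boxes over time"; the actual release may use flat per-frame tables instead.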
The dataset builds on existing egocentric 4D resources, namely **ADT and THUD++**. Our main contribution is the **large-scale, task-driven QA annotations**, which enable fine-grained spatio-temporal reasoning in dynamic egocentric scenes.
See also the project repository on [GitHub](https://github.com/Dancing-Github/EgoDynamic4D).