---
license: cc-by-4.0
---

# MTU3D Dataset

[📄 Paper (MTU3D)](https://www.arxiv.org/abs/2507.04047) | [🧾 Project GitHub](https://github.com/MTU3D/MTU3D)

The **MTU3D dataset** provides all the data needed to reproduce the experiments in [Move to Understand a 3D Scene: Bridging Visual Grounding and Exploration for Efficient and Versatile Embodied Navigation (ICCV 2025)](https://www.arxiv.org/abs/2507.04047), including **stage-1 data for embodied segmentation training**, **features saved from stage 1**, **VLE stage-2 data**, and **embodied_bench_data**.

Specifically, the **\*.tar.gz** archives in this dataset correspond to the **data.\*** entries in the config file as follows:

| .tar.gz archive | config entry | description |
| --- | --- | --- |
| embodied_base.tar.gz | data.embodied_base | stage-1 data |
| embodied_feat.tar.gz | data.embodied_feat | features saved from stage 1 |
| embodied_vle.tar.gz | data.embodied_vle | VLE stage-2 data |

**embodied_bench_data.tar.gz** is used to set `data_set_path` and `navigation_data_path` in `hm3d-online/*.nav.py`.

> 📌 The dataset is large and stored in split archives. Please **download all parts**, **merge**, and **extract** them before use (a merge-and-extract sketch is given at the end of this card).

---

### Citation:

```
@article{zhu2025mtu,
  title   = {Move to Understand a 3D Scene: Bridging Visual Grounding and Exploration for Efficient and Versatile Embodied Navigation},
  author  = {Zhu, Ziyu and Wang, Xilin and Li, Yixuan and Zhang, Zhuofan and Ma, Xiaojian and Chen, Yixin and Jia, Baoxiong and Liang, Wei and Yu, Qian and Deng, Zhidong and Huang, Siyuan and Li, Qing},
  journal = {International Conference on Computer Vision (ICCV)},
  year    = {2025}
}
```
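---

### Merging & extracting the split archives:

A minimal Python sketch for reassembling the split parts and unpacking them. The `<name>.tar.gz.part*` naming pattern and the `data/` extraction directory are assumptions for illustration only; check the actual part file names in this repository and the paths expected by your config before running.

```python
import glob
import shutil
import tarfile
from pathlib import Path

ARCHIVES = ["embodied_base", "embodied_feat", "embodied_vle", "embodied_bench_data"]
DATA_ROOT = Path("data")  # assumed extraction target; point this wherever your config expects
DATA_ROOT.mkdir(parents=True, exist_ok=True)

for name in ARCHIVES:
    merged = Path(f"{name}.tar.gz")
    # Hypothetical part naming; adjust the pattern to the files actually listed in this repo.
    parts = sorted(glob.glob(f"{name}.tar.gz.part*"))
    if parts:
        # Concatenate the downloaded parts back into a single .tar.gz archive.
        with merged.open("wb") as out:
            for part in parts:
                with open(part, "rb") as src:
                    shutil.copyfileobj(src, out)
    # Extract the (merged) archive into the data root.
    with tarfile.open(merged, "r:gz") as tar:
        tar.extractall(DATA_ROOT)
```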