---
license: cc-by-nc-4.0
---

## Dataset Description:

This dataset is a multimodal collection of trajectories generated in Isaac Lab on the ``Mug in Drawer`` task defined in ``mindmap``.

The task was created to evaluate robot manipulation policies on their spatial memory capabilities.
With this (partial) dataset you can generate the full dataset used for [mindmap model](https://huggingface.co/nvidia/PhysicalAI-Robotics-mindmap-Checkpoints/tree/main) training,
run ``mindmap`` training, or evaluate ``mindmap`` in open or closed loop.

This dataset is for research and development only.
## Dataset Owner(s):

NVIDIA Corporation

## Dataset Creation Date:

09/26/2025
## License/Terms of Use:

This dataset is governed by the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
## Intended Usage:

This dataset is intended for:
- Research in robot manipulation policies using imitation learning
- Training and evaluating robotic manipulation policies on tasks requiring spatial memory
- Training and evaluating the [mindmap model](https://huggingface.co/nvidia/PhysicalAI-Robotics-mindmap-Checkpoints) using the [mindmap codebase](https://github.com/NVlabs/nvblox_mindmap)
## Dataset Characterization:

### Data Collection Method
* Synthetic
* Human teleoperation
* Automatic trajectory generation

15 human-teleoperated demonstrations were collected with a SpaceMouse in Isaac Lab.
From these 15 demonstrations, a total of 250 demos were generated automatically using a synthetic motion trajectory generation framework,
[Isaac Lab Mimic](https://isaac-sim.github.io/IsaacLab/v2.0.1/source/overview/teleop_imitation.html).
### Labeling Method

Not Applicable
## Dataset Format

We provide the 250 Mimic-generated demonstrations in an HDF5 dataset file, along with ``mindmap``-formatted datasets for 10 demonstrations converted from the HDF5 file.

Due to storage limitations, we provide only 10 demonstrations in the ``mindmap`` format.
To generate the full 250 demos, refer to the ``mindmap`` [data generation docs](https://nvlabs.github.io/nvblox_mindmap/pages/data_generation.html).

The ``mindmap`` dataset consists of tarred demonstration folders. After untarring, the structure is as follows:
```
📁 <DATASET_NAME>/
├── 📁 demo_00000/
│   ├── 00000.<CAMERA_NAME>_depth.png
│   ├── 00000.<CAMERA_NAME>_intrinsics.npy
│   ├── 00000.<CAMERA_NAME>_pose.npy
│   ├── 00000.<CAMERA_NAME>_rgb.png
│   ├── 00000.nvblox_vertex_features.zst
│   ├── 00000.robot_state.npy
│   ├── ...
│   ├── <NUM_STEPS_IN_DEMO>.<CAMERA_NAME>_depth.png
│   ├── ...
│   ├── <NUM_STEPS_IN_DEMO>.robot_state.npy
│   └── demo_successful.npy
├── 📁 demo_00001/
│   └── ...
├── 📁 demo_00009/
└── dataset_name.hdf5
```
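The card does not prescribe an extraction command, but a minimal shell sketch could look like the following. Note that the tarball and output folder names (``demo_00000.tar``, ``mug_in_drawer_dataset``) are illustrative assumptions; the first block fabricates a dummy tarball only so the loop has something to extract.

```shell
# Pack a dummy demo folder into a tarball so this sketch runs end to end
# (skip this block when working with the real downloaded tarballs).
mkdir -p demo_00000
touch demo_00000/demo_successful.npy
tar -cf demo_00000.tar demo_00000
rm -r demo_00000

# Extract every demo tarball into a single dataset folder.
mkdir -p mug_in_drawer_dataset
for archive in demo_*.tar; do
    tar -xf "$archive" -C mug_in_drawer_dataset
done
ls mug_in_drawer_dataset
```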
Each demonstration consists of a variable number of steps (NUM_STEPS_IN_DEMO), each with multimodal data:
- RGB-D frame, including the corresponding camera pose and intrinsics
- Metric-semantic reconstruction, represented as a featurized pointcloud in ``nvblox_vertex_features.zst``
- Robot state, including end-effector pose, gripper closedness, and head yaw orientation
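As a minimal sketch of how the per-step ``.npy`` files could be read, assuming they hold NumPy arrays: the file naming follows the tree above, while the camera name ``table_cam`` and the array shapes are made-up assumptions for illustration. The example first writes a tiny dummy demo folder so it is runnable end to end.

```python
from pathlib import Path

import numpy as np


def load_step(demo_dir: Path, step: int, camera: str) -> dict:
    # Load the per-step .npy files, named as in the dataset tree above.
    prefix = demo_dir / f"{step:05d}"
    return {
        "intrinsics": np.load(f"{prefix}.{camera}_intrinsics.npy"),
        "camera_pose": np.load(f"{prefix}.{camera}_pose.npy"),
        "robot_state": np.load(f"{prefix}.robot_state.npy"),
    }


# Write a tiny dummy demo folder so the sketch runs without the real data;
# the camera name "table_cam" and these shapes are illustrative assumptions,
# not part of the dataset card.
demo_dir = Path("demo_00000")
demo_dir.mkdir(exist_ok=True)
np.save(demo_dir / "00000.table_cam_intrinsics.npy", np.eye(3))
np.save(demo_dir / "00000.table_cam_pose.npy", np.eye(4))
np.save(demo_dir / "00000.robot_state.npy", np.zeros(8))
np.save(demo_dir / "demo_successful.npy", np.array(True))

step_data = load_step(demo_dir, 0, "table_cam")
print(step_data["intrinsics"].shape)  # (3, 3)
print(bool(np.load(demo_dir / "demo_successful.npy")))  # True
```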
## Dataset Quantification

Record Count: 250 demonstrations/trajectories

Total Storage: 74 GB
## Reference(s):

- ``mindmap`` paper:
  - Remo Steiner, Alexander Millane, David Tingdahl, Clemens Volk, Vikram Ramasamy, Xinjie Yao, Peter Du, Soha Pouya and Shiwei Sheng. "**mindmap: Spatial Memory in Deep Feature Maps for 3D Action Policies**". CoRL 2025 Workshop RemembeRL. [arXiv preprint arXiv:2509.20297 (2025).](https://arxiv.org/abs/2509.20297)
- ``mindmap`` codebase:
  - [github.com/NVlabs/nvblox_mindmap](https://github.com/NVlabs/nvblox_mindmap)
## Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).