Update README.md
README.md (changed)
@@ -73,14 +73,14 @@ Merged_Trajectory: Trajectory data
 
 ### Data Specifications
 
-Attributes
-sim:
+#### Attributes
+##### sim:
 
 False: Real environment data
 
 True: Simulation data
 
-Observations
+##### Observations
 observations/images/: Camera image data
 
 Default camera name: front
@@ -101,14 +101,14 @@ Meaning: Robot end-effector position + quaternion orientation
 
 Order: [Pos X, Pos Y, Pos Z, Q_X, Q_Y, Q_Z, Q_W]
 
-Actions
+##### Actions
 Type: Floating point dataset
 
 Shape: (timesteps, 7)
 
 Meaning: Actions (same structure as qpos, typically mirroring qpos)
 
-Data Conversion
+## Data Conversion
 Supports one-click export to specific formats via web toolchain, or conversion between formats using tools like:
 
 Any4lerobot: GitHub - Tavish9/any4lerobot
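As a quick illustration of the (timesteps, 7) qpos/action layout in the hunk above, here is a minimal NumPy sketch that splits a qpos array into position and orientation; the `split_qpos` helper and the sample array are illustrative, not part of the FastUMI toolchain:

```python
import numpy as np

# Illustrative helper (not part of the FastUMI toolchain): split a
# (timesteps, 7) qpos array into end-effector position and quaternion,
# following the documented order [Pos X, Pos Y, Pos Z, Q_X, Q_Y, Q_Z, Q_W].
def split_qpos(qpos: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    assert qpos.ndim == 2 and qpos.shape[1] == 7, "expected (timesteps, 7)"
    positions = qpos[:, :3]      # XYZ end-effector position
    quaternions = qpos[:, 3:]    # (Q_X, Q_Y, Q_Z, Q_W) orientation
    return positions, quaternions

# One timestep at the origin with identity orientation.
pos, quat = split_qpos(np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]]))
assert pos.shape == (1, 3) and quat.shape == (1, 4)
```

Since actions share the same 7-dim structure as qpos, the same helper applies to the action dataset unchanged.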
@@ -120,55 +120,3 @@ hdf5 → lerobot v3.0
 hdf5 → lerobot(Pi0) v2.0
 
 hdf5 → rlds
-
-Model Performance
-Preliminary experiments show that models trained on this dataset demonstrate significant multi-task generalization capabilities in universal manipulation tasks:
-
-VLA Models: Including PI-O models with language understanding and action planning capabilities, exhibiting excellent generalization and execution stability in multi-task language-conditioned control
-
-VA Models: Classical visual control architectures like ACT and DP also show significant improvements, with enhanced robustness particularly in complex operation sequences, viewpoint perturbations, and fine motion tracking
-
-Related Links
-Project Homepage: https://fastumi.com/pro/
-
-FastUMI Project: https://fastumi.com
-
-Hugging Face Dataset: https://huggingface.co/datasets/IPE...
-
-Research Paper: [2409.19499] FastUMI: A Scalable and...
-
-Open Source Toolchain:
-
-Demo Replay: GitHub - Loki-Lu/FastUMI_replay_sin...
-
-Dual-arm Demo: GitHub - Loki-Lu/FastUMI_replay_du...
-
-Hardware SDK: GitHub - FastUMIRobotics/FastUMI_...
-
-Monitoring Tools: GitHub - FastUMIRobotics/FastUMI_...
-
-Data Collection Tools: GitHub - FastUMIRobotics/FastUMI_...
-
-Related Research
-[2508.10538] MLM: Learning Multi-ta...
-
-PIO (FastUMI Lightweight Adaptation, Version V0) Full Tutorial: PIO (FastUMI data lightweight adaptation, version V0)…
-
-Citation
-If you use this dataset in your research, please cite the relevant papers:
-
-bibtex
-@article{fastumi2024,
-title={FastUMI: A Scalable and Hardware-Agnostic Framework for Robot Manipulation Learning},
-author={FastUMI Team},
-journal={arXiv preprint},
-year={2024}
-}
-Contact
-For any questions or suggestions, please contact the development team:
-
-Lead: [Name]
-
-Email: [Email Address]
-
-WeChat: [WeChat ID]