Update README.md

README.md CHANGED

@@ -1,48 +1,70 @@
-
-
-
-
-
-
-
 
-FastUMI
 
-
-- **FastUMI Project Homepage**: [https://fastumi.com](https://fastumi.com)
-- **Hugging Face Dataset**: [https://huggingface.co/datasets/IPE...](https://huggingface.co/datasets/IPE...)
-- **Research Paper**: [2409.19499] FastUMI: A Scalable and...
 
-
-- **Demo Replay (Single Arm)**: [GitHub - Loki-Lu/FastUMI_replay_sin...](https://github.com/Loki-Lu/FastUMI_replay_sin...)
-- **Demo Replay (Dual Arm)**: [GitHub - Loki-Lu/FastUMI_replay_du...](https://github.com/Loki-Lu/FastUMI_replay_du...)
-- **Hardware SDK**: [GitHub - FastUMIRobotics/FastUMI_...](https://github.com/FastUMIRobotics/FastUMI_...)
-- **Monitoring Tools**: [GitHub - FastUMIRobotics/FastUMI_...](https://github.com/FastUMIRobotics/FastUMI_...)
-- **Data Collection Tools**: [GitHub - FastUMIRobotics/FastUMI_...](https://github.com/FastUMIRobotics/FastUMI_...)
 
-
-- **[2508.10538] MLM: Learning Multi-ta...**
-- **PIO (FastUMI Data Lightweight Adaptation, Version V0) Full Tutorial**: [PIO (FastUMI Data Lightweight Adaptation, Version V0)...]()
 
-##
 
-FastUMI Pro builds upon FastUMI
 - Higher precision trajectory data
--
-- Comprehensive leadership in
-
-FastUMI previously open-sourced FastUMI-150K as the complete version, including approximately 150,000 real-world manipulation trajectories. This version was first provided to selected collaborating research teams for training large-scale VLA (Vision-Language-Action) models. Preliminary experiments show that models trained on this dataset demonstrate significant multi-task generalization in general manipulation tasks.
 
-
-- VLA models, including the PI-O model with language understanding and action planning capabilities, show excellent generalization and execution stability in multi-task language-conditioned control
-- VA models, such as ACT, DP, and other classic visual control architectures, have also achieved significant improvements on this data, showing especially strong robustness across complex operation sequences, viewpoint disturbances, and fine motion tracking
 
-##
 
-###
 
-**Sample Data Link**: [https://huggingface.co/datasets/FastUM...](https://huggingface.co/datasets/FastUM...)
-
-#### Original Command (may be slow in some regions):
 ```bash
-
+---
+language:
+- en
+- zh
+tags:
+- robotics
+- manipulation
+- vla
+- trajectory-data
+- multimodal
+- vision-language-action
+license: other
+task_categories:
+- robotics
+- reinforcement-learning
+- computer-vision
+multimodal: vision+language+action
+dataset_info:
+  features:
+  - name: rgb_images
+    dtype: image
+    description: Multi-view RGB images
+  - name: slam_poses
+    sequence: float32
+    description: SLAM pose trajectories
+  - name: vive_poses
+    sequence: float32
+    description: Vive tracking system poses
+  - name: point_clouds
+    sequence: float32
+    description: Time-of-Flight point cloud data
+  - name: clamp_data
+    sequence: float32
+    description: Clamp sensor readings
+  - name: merged_trajectory
+    sequence: float32
+    description: Fused trajectory data
+configs:
+- config_name: default
+  data_files: "**/*"
+---
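The `dataset_info` front matter declares the per-sample feature schema. A minimal, stdlib-only sketch restating that schema as plain data, so downstream tooling can sanity-check field names; the entries mirror the YAML declarations, while the grouping logic is purely illustrative and not part of any official FastUMI tooling:

```python
# Feature schema from the dataset_info front matter, restated as plain data.
features = [
    {"name": "rgb_images", "dtype": "image", "description": "Multi-view RGB images"},
    {"name": "slam_poses", "sequence": "float32", "description": "SLAM pose trajectories"},
    {"name": "vive_poses", "sequence": "float32", "description": "Vive tracking system poses"},
    {"name": "point_clouds", "sequence": "float32", "description": "Time-of-Flight point cloud data"},
    {"name": "clamp_data", "sequence": "float32", "description": "Clamp sensor readings"},
    {"name": "merged_trajectory", "sequence": "float32", "description": "Fused trajectory data"},
]

# Split image-typed fields from float32 sequence fields (illustrative grouping).
image_fields = [f["name"] for f in features if f.get("dtype") == "image"]
sequence_fields = [f["name"] for f in features if f.get("sequence") == "float32"]

print(image_fields)     # -> ['rgb_images']
print(sequence_fields)  # -> ['slam_poses', 'vive_poses', 'point_clouds', 'clamp_data', 'merged_trajectory']
```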
 
+# FastUMI Pro Dataset
 
+## Project Description
 
+FastUMI Pro is the upgraded enterprise edition of FastUMI, designed to give corporate users a streamlined, end-to-end data acquisition and transformation system.
 
+FastUMI (Fast Universal Manipulation Interface) is a dataset and interface framework for universal robot manipulation tasks, supporting hardware-agnostic, scalable, and efficient data collection and model training. The project provides physical prototype systems, complete data collection code, standardized data formats, and utility tools to facilitate real-world manipulation learning research.
 
+## Dataset Overview
 
+FastUMI Pro builds upon FastUMI with enhanced features:
 - Higher precision trajectory data
+- Support for more diverse robot embodiments, truly enabling "one brain, many embodiments" applications
+- Comprehensive data leadership in the field
 
+The original FastUMI open-sourced FastUMI-150K, containing approximately 150,000 real-world manipulation trajectories; it was first provided to selected research partners for training large-scale VLA (Vision-Language-Action) models.
 
+## Quick Start
 
+### Download Example Data
 
 ```bash
+# Original command (may be slow in some regions)
+huggingface-cli download FastUMIPro/example_data_fastumi_pro_raw --repo-type dataset --local-dir ~/fastumi_data/
+
+# Mirror acceleration: route requests through hf-mirror.com
+export HF_ENDPOINT=https://hf-mirror.com
+huggingface-cli download --repo-type dataset --resume-download FastUMIPro/example_data_fastumi_pro_raw --local-dir ~/fastumi_data/
 ```