---
language:
- en
- zh
tags:
- robotics
- embodied-ai
- manipulation
- multimodal
- vla
- data-collection
license: other
task_categories:
- robotics
- imitation-learning
multimodal: vision+depth+trajectory+force
configs:
- config_name: default
data_files: "**/*"
---
<h1 align="center" style="font-size: 40px; font-weight: bold;">
FastUMI Pro™ Robotics Dataset
</h1>
<h3 align="center">
Enterprise-Grade Data Engine for Embodied AI
</h3>
<p align="center">
<img src="https://img.shields.io/badge/Product-FastUMI_Pro-brightgreen?style=flat"/>
<img src="https://img.shields.io/badge/Multimodal-7_Sensors-orange?style=flat"/>
<img src="https://img.shields.io/badge/Trajectory_Data-150K-blue?style=flat"/>
<img src="https://img.shields.io/badge/Application-VLA_Training-purple?style=flat"/>
</p>
---
## 📖 Overview

FastUMI (Fast Universal Manipulation Interface) is a dataset and interface framework for general-purpose robotic manipulation tasks, designed to support hardware-agnostic, scalable, and efficient data collection and model training. The project provides:

- Physical prototype systems
- A complete data collection codebase
- Standardized data formats and utilities
- Tools for real-world manipulation learning research

## 🚀 Features

### FastUMI Pro Enhancements

- ✅ **Higher-precision trajectory data**
- ✅ **Diverse embodiment support** for true "one-brain-multiple-forms" deployment
- ✅ **Enterprise-ready** pipeline with full-link data processing

### FastUMI-150K

- ~150,000 real-world manipulation trajectories
- Used by research partners for large-scale VLA (Vision-Language-Action) model training
- Demonstrated significant multi-task generalization capabilities

## 📊 Model Performance
**VLA Model Results**: [TBD]
## 🛠️ Toolchain
| Tool | Description | Link |
|------|-------------|------|
| **Single-Arm Demo Replay** | Single-arm data replay code | [GitHub](https://github.com/Loki-Lu/FastUMI_replay_singleARM) |
| **Dual-Arm Demo Replay** | Dual-arm data replay code | [GitHub](https://github.com/Loki-Lu/FastUMI_replay_dualARM) |
| **Hardware SDK** | FastUMI hardware development kit | [GitHub](https://github.com/FastUMIRobotics/FastUMI_Hardware_SDK) |
| **Monitor Tool** | Real-time device monitoring | [GitHub](https://github.com/FastUMIRobotics/FastUMI_Monitor_Tool) |
| **Data Collection** | Data collection utilities | [GitHub](https://github.com/FastUMIRobotics/FastUMI_Data_Collection) |
### Research & Applications
- **Paper**: [MLM: Learning Multi-task Loco-Manipulation Whole-Body Control for Quadruped Robot with Arm](https://arxiv.org/abs/2508.10538)
- **Tutorial**: PI0 (FastUMI Data Lightweight Adaptation, Version V0) Full Pipeline
## 📥 Data Download
### Example Dataset
```bash
# Direct download (may be slow in some regions)
huggingface-cli download FastUMIPro/example_data_fastumi_pro_raw --repo-type dataset --local-dir ~/fastumi_data/
```
### Mirror Download (Recommended)
```bash
# Set mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com

# Download via mirror
huggingface-cli download --repo-type dataset --resume-download FastUMIPro/example_data_fastumi_pro_raw --local-dir ~/fastumi_data/
```
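The same download can be scripted from Python. A minimal sketch using `huggingface_hub.snapshot_download` (assuming the `huggingface_hub` package is installed); the mirror endpoint is optional:

```python
# Sketch: programmatic download of the example dataset.
# Assumes the `huggingface_hub` package is installed.
import os


def configure_mirror(endpoint: str = "https://hf-mirror.com") -> None:
    """Point huggingface_hub at a mirror; call before downloading."""
    os.environ["HF_ENDPOINT"] = endpoint


def download_example(local_dir: str = "~/fastumi_data"):
    # Imported lazily so the mirror endpoint set above takes effect.
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id="FastUMIPro/example_data_fastumi_pro_raw",
        repo_type="dataset",
        local_dir=os.path.expanduser(local_dir),
    )
```

Call `configure_mirror()` first only if the direct endpoint is slow in your region, then `download_example()`.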
## 📁 Data Structure
Each session represents an independent operation "episode" containing observation data and action sequences.
### Directory Structure
```text
session_001/
└── device_label_xv_serial/
└── session_timestamp/
├── RGB_Images/
│ ├── timestamps.csv
│ └── Frames/
│ ├── frame_000001.jpg
│ └── ...
├── SLAM_Poses/
│ └── slam_raw.txt
├── Vive_Poses/
│ └── vive_data_tum.txt
├── ToF_PointClouds/
│ ├── timestamps.csv
│ └── PointClouds/
│ └── pointcloud_000001.pcd
├── Clamp_Data/
│ └── clamp_data_tum.txt
└── Merged_Trajectory/
├── merged_trajectory.txt
└── merge_stats.txt
```
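The layout above can be traversed with a few lines of Python. This is an illustrative sketch, not an official loader; the folder names are taken from the tree above, and the glob patterns are assumptions:

```python
# Sketch: enumerate recording episodes and their available modality
# folders, following the documented session_XXX/device/timestamp layout.
from pathlib import Path

# Modality folder names as documented in the directory structure.
MODALITIES = [
    "RGB_Images", "SLAM_Poses", "Vive_Poses",
    "ToF_PointClouds", "Clamp_Data", "Merged_Trajectory",
]


def list_episodes(root: str):
    """Yield each session_*/device/timestamp episode directory."""
    for session in sorted(Path(root).glob("session_*")):
        for device in sorted(p for p in session.iterdir() if p.is_dir()):
            for ts in sorted(p for p in device.iterdir() if p.is_dir()):
                yield ts


def available_modalities(episode_dir: Path):
    """Return which documented modality folders exist in an episode."""
    return [m for m in MODALITIES if (episode_dir / m).is_dir()]
```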
## 📋 Data Specifications
| Data Type | Path | Shape | Type | Description |
| :--- | :--- | :--- | :--- | :--- |
| **RGB Images** | `session_XXX/RGB_Images/Frames/frame_XXXXXX.jpg` | `(frames, 1080, 1920, 3)` | `uint8` | Camera frames, 60 FPS |
| **SLAM Poses** | `session_XXX/SLAM_Poses/slam_raw.txt` | `(timestamps, 7)` | `float` | UMI end-effector poses |
| **Vive Poses** | `session_XXX/Vive_Poses/vive_data_tum.txt` | `(timestamps, 7)` | `float` | Vive base station poses |
| **ToF PointClouds** | `session_XXX/ToF_PointClouds/PointClouds/pointcloud_...pcd` | per-file point cloud | `.pcd` | Time-of-Flight point cloud data |
| **Clamp Data** | `session_XXX/Clamp_Data/clamp_data_tum.txt` | `(timestamps, 1)` | `float` | Gripper spacing (mm) |
| **Merged Trajectory** | `session_XXX/Merged_Trajectory/merged_trajectory.txt` | `(timestamps, 7)` | `float` | Fused trajectory (Vive/UMI, selected by velocity) |
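The `.pcd` files can be read with a point-cloud library such as Open3D; for illustration, here is a minimal reader sketch for the ASCII variant of the format only. It assumes whitespace-separated rows after a `DATA ascii` header line; binary `.pcd` files need a full parser:

```python
# Sketch: minimal reader for ASCII .pcd files in ToF_PointClouds/.
# Handles only the `DATA ascii` variant; use a library such as Open3D
# for binary point clouds.
import numpy as np


def read_ascii_pcd(path: str) -> np.ndarray:
    """Return an (N, k) float array of points following the PCD header."""
    rows, in_data = [], False
    with open(path) as f:
        for line in f:
            if in_data:
                vals = line.split()
                if vals:  # skip trailing blank lines
                    rows.append([float(v) for v in vals])
            elif line.strip().lower().startswith("data"):
                if "ascii" not in line.lower():
                    raise ValueError("only DATA ascii is supported here")
                in_data = True
    return np.array(rows)
```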
### Pose Data Format
All pose data (SLAM, Vive, Merged) follows the same TUM-style format, one sample per line:
| Field | Description |
| :--- | :--- |
| **Timestamp** | Unix timestamp of the trajectory data |
| **Pos X** | X-coordinate of position (meters) |
| **Pos Y** | Y-coordinate of position (meters) |
| **Pos Z** | Z-coordinate of position (meters) |
| **Q_X** | X-component of orientation quaternion |
| **Q_Y** | Y-component of orientation quaternion |
| **Q_Z** | Z-component of orientation quaternion |
| **Q_W** | W-component of orientation quaternion |
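A pose file in this layout can be loaded with NumPy and the quaternion converted to a rotation matrix. This sketch assumes whitespace-separated values, as is conventional for TUM-format trajectory files; adjust the delimiter if the files differ:

```python
# Sketch: load a pose file (one [t, x, y, z, qx, qy, qz, qw] row per
# line, whitespace-separated — an assumption based on the TUM naming)
# and convert a unit quaternion to a 3x3 rotation matrix.
import numpy as np


def load_poses(path: str) -> np.ndarray:
    """Return an (N, 8) array: [t, x, y, z, qx, qy, qz, qw] per row."""
    return np.loadtxt(path)


def quat_to_matrix(qx: float, qy: float, qz: float, qw: float) -> np.ndarray:
    """Rotation matrix from a unit quaternion in (x, y, z, w) order."""
    x, y, z, w = qx, qy, qz, qw
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])
```

For example, the identity quaternion `(0, 0, 0, 1)` yields the identity matrix.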
## 🔄 Data Conversion
[TBD]
## 🤝 Collaboration
The FastUMI Pro dataset is available for research collaboration. The full FastUMI-150K dataset has already been provided to partner research teams for large-scale model training.
## 📞 Contact
***
> ### ☎️ Development Team Contact
>
> For any questions or suggestions, please feel free to contact our development team.
>
> | **Lead** | Ding Yan |
> | :--- | :--- |
> | **Email** | [dingyan@lumosbot.tech](mailto:dingyan@lumosbot.tech) |
> | **WeChat** | `Duke_dingyan` |
***