---
language:
- en
- zh
tags:
- robotics
- manipulation
- trajectory-data
- multimodal
- embodied-ai
multimodal: vision+language+action
license: other
task_categories:
- robotics
dataset_info:
  features:
  - name: rgb_images
    dtype: image
    description: Multi-view RGB images
  - name: slam_poses
    sequence: float32
    description: SLAM pose trajectories
  - name: vive_poses
    sequence: float32
    description: Vive tracking system poses
  - name: point_clouds
    sequence: float32
    description: Time-of-Flight point cloud data
  - name: clamp_data
    sequence: float32
    description: Clamp sensor readings
  - name: merged_trajectory
    sequence: float32
    description: Fused trajectory data
configs:
- config_name: default
  data_files: "**/*"
---
<div align="center">
<h1 style="font-size:44px; font-weight:900; margin-bottom:10px;">
FastUMI Pro – Multimodal Sample Dataset
</h1>
<h3 style="font-size:20px; font-weight:400; margin-top:-10px;">
Small-Scale Demonstration Data from the FastUMI Pro Multimodal Sensing System
<br>(Only Dozens of Trajectories — Full Dataset Available Upon Request)
</h3>
<br>
<img src="https://img.shields.io/badge/FastUMI-Pro-brightgreen" height="15"/>
<img src="https://img.shields.io/badge/Sample%20Dataset-Small-blue" height="15"/>
<img src="https://img.shields.io/badge/Multimodal-Vision%20%7C%20Pose%20%7C%20PointCloud-orange" height="15"/>
<br>
<a href="https://fastumi.com/pro/">Project Homepage</a>
</div>
---
## 📖 Overview
The **FastUMI Pro Sample Dataset** provides a public preview of the multimodal sensing capabilities of the FastUMI Pro data collection system.
This release contains **only dozens of sample trajectories** and is intended for:
- System testing
- Robotics and AI pipeline integration
- Preliminary algorithm development
- Demonstrating multimodal alignment and synchronization
Included sensor streams:
- RGB camera images
- Visual SLAM trajectory
- Vive tracking trajectory
- ToF point cloud frames
- Clamp (gripper width) sensor readings
- Fused multi-sensor trajectory
Full-scale datasets are available upon request for research or enterprise collaboration.
---
## 📁 Directory Structure
<div align="center">
<h3>Dataset Folder Layout</h3>
</div>
```text
session_001/
└── device_label_xv_serial/
└── session_timestamp/
├── RGB_Images/
│ ├── timestamps.csv
│ └── Frames/
│ ├── frame_000001.jpg
│ └── ...
├── SLAM_Poses/
│ └── slam_raw.txt
├── Vive_Poses/
│ └── vive_data_tum.txt
├── ToF_PointClouds/
│ ├── timestamps.csv
│ └── PointClouds/
│ └── pointcloud_000001.pcd
├── Clamp_Data/
│ └── clamp_data_tum.txt
└── Merged_Trajectory/
├── merged_trajectory.txt
└── merge_stats.txt
```
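The layout above can be enumerated programmatically. The sketch below is a minimal, hypothetical example: it builds a synthetic tree that mirrors the documented layout (session, device, and timestamp names are placeholders) and discovers the modality folders per session.

```python
import tempfile
from pathlib import Path

# Modality folder names as documented in the layout above.
MODALITIES = [
    "RGB_Images", "SLAM_Poses", "Vive_Poses",
    "ToF_PointClouds", "Clamp_Data", "Merged_Trajectory",
]

def list_sessions(root: Path) -> dict:
    """Map each session directory to the modality folders it contains.

    Layout assumed: session_xxx/<device_label>/<session_timestamp>/<Modality>/
    """
    sessions = {}
    for p in root.glob("session_*/*/*/*"):
        if p.is_dir() and p.name in MODALITIES:
            key = p.parent.relative_to(root).as_posix()
            sessions.setdefault(key, []).append(p.name)
    return sessions

# Demo on a synthetic tree mirroring the documented layout.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for m in MODALITIES:
        (root / "session_001" / "device_A" / "20240101_120000" / m).mkdir(parents=True)
    found = list_sessions(root)
    print(found)
```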
---
## 📊 Data Specifications
| **Data Type** | **Path** | **Shape** | **Type** | **Description** |
|--------------|----------|-----------|----------|-----------------|
| RGB Images | RGB_Images/Frames/*.jpg | (H, W, 3) | uint8 | Multi-view RGB images |
| SLAM Poses | SLAM_Poses/slam_raw.txt | (N, 8) | float | Timestamp + SE(3) pose |
| Vive Poses | Vive_Poses/vive_data_tum.txt | (N, 8) | float | Vive tracking system poses |
| ToF Point Clouds | ToF_PointClouds/PointClouds/*.pcd | variable | PCD (float) | Time-of-Flight point clouds |
| Clamp Data | Clamp_Data/clamp_data_tum.txt | (N, 2) | float | Timestamp + clamp width |
| Merged Trajectory | Merged_Trajectory/merged_trajectory.txt | (N, 8) | float | Fused multi-sensor pose |
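The whitespace-separated `.txt` files can be read directly with NumPy. A minimal sketch, using illustrative rows rather than real dataset values:

```python
import io
import numpy as np

# Two illustrative (N, 8) pose rows: timestamp x y z qx qy qz qw.
# These sample values are NOT taken from the dataset.
sample = io.StringIO(
    "1700000000.10 0.10 0.20 0.30 0.0 0.0 0.0 1.0\n"
    "1700000000.20 0.11 0.21 0.31 0.0 0.0 0.0 1.0\n"
)
poses = np.loadtxt(sample)      # shape (N, 8)
timestamps = poses[:, 0]        # Unix timestamps
positions = poses[:, 1:4]       # x, y, z in meters
quaternions = poses[:, 4:8]     # qx, qy, qz, qw
print(poses.shape)
```

In practice, replace the `StringIO` buffer with a path such as `Merged_Trajectory/merged_trajectory.txt`; the (N, 2) clamp file loads the same way.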
---
## 🧭 Pose Data Format
All pose files (SLAM, Vive, fused) follow the same TUM-style line format:
```text
timestamp x y z qx qy qz qw
```
| Field | Description |
|-------|-------------|
| timestamp | Unix timestamp |
| x | Position X (meters) |
| y | Position Y (meters) |
| z | Position Z (meters) |
| qx | Quaternion X component |
| qy | Quaternion Y component |
| qz | Quaternion Z component |
| qw | Quaternion W component |
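One such line can be converted into a 4×4 SE(3) transform with plain NumPy. This is a hypothetical sketch (the sample line is illustrative, not from the dataset) using the standard quaternion-to-rotation-matrix formula:

```python
import numpy as np

# Illustrative TUM-style line: 90-degree rotation about z at (0.1, 0.2, 0.3).
line = "1700000000.10 0.1 0.2 0.3 0.0 0.0 0.70710678 0.70710678"

def pose_line_to_se3(line: str):
    """Parse 'timestamp x y z qx qy qz qw' into (timestamp, 4x4 matrix)."""
    t, x, y, z, qx, qy, qz, qw = map(float, line.split())
    # Standard unit-quaternion (qx, qy, qz, qw) to rotation matrix.
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [x, y, z]
    return t, T

stamp, T = pose_line_to_se3(line)
print(np.round(T, 3))
```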
---
## 📥 Download
```bash
huggingface-cli download FastUMIPro/example_data_fastumi_pro_raw \
--repo-type dataset \
--local-dir ./fastumi_sample/
```
Optionally, route downloads through a mirror endpoint:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
---
## ⚠️ Dataset Scale Notice
This dataset contains **only a small number of sample episodes**
and is **not intended for large-scale training**.
For full multimodal datasets or enterprise collaborations, please contact the FastUMI team.
---
## 📞 Contact
Lead: **Ding Yan**
Email: **dingyan@lumosbot.tech**
WeChat: **Duke_dingyan**
---