kim020306 committed · verified · Commit 1b1f090 · Parent(s): 77e1e67

Update README.md

Files changed (1): README.md (+44 -90)
@@ -72,13 +72,6 @@ This release contains **only dozens of sample trajectories** and is intended for
72
  - Preliminary algorithm development
73
  - Demonstrating multimodal alignment and synchronization
74
 
75
- Included sensor streams:
76
-
77
- - RGB camera images
78
- - Merged_Trajectory
79
- - ToF point cloud frames
80
- - Clamp (gripper width) sensor readings
81
- - Fused multi-sensor trajectory
82
 
83
  Full-scale datasets are available upon request for research or enterprise collaboration.
84
 
@@ -87,55 +80,16 @@ Full-scale datasets are available upon request for research or enterprise collab
87
 
88
  ## 📊 Data Specifications
89
90
  | **Data Type** | **Path** | **Shape** | **Type** | **Description** |
91
  |--------------|----------|-----------|----------|-----------------|
92
- | RGB Images | RGB_Images/Frames/*.jpg | (H, W, 3) | uint8 | Multi-view RGB images |
93
  | ToF PointClouds | ToF_PointClouds/PointClouds/*.pcd | variable | pcd | Time-of-Flight point clouds |
94
  | Clamp Data | Clamp_Data/clamp_data_tum.txt | (N, 2) | float | Timestamp + clamp width |
95
  | Merged Trajectory | Merged_Trajectory/merged_trajectory.txt | (N, 8) | float | Fused multi-sensor pose |
96
 
97
- ---
98
-
99
-
100
- ## 🧭 Pose Data Format
101
-
102
- All pose data (SLAM, Vive, fused) follow the same structure:
103
-
104
- ```markdown
105
- timestamp x y z qx qy qz qw
106
-
107
- ```
108
-
109
- | Field | Description | Field | Description |
110
- | :---: | :---: | :---: | :---: |
111
- | timestamp | Unix timestamp | qx | Quaternion X component |
112
- | x | Position X (meters) | qy | Quaternion Y component |
113
- | y | Position Y (meters) | qz | Quaternion Z component |
114
- | z | Position Z (meters) | qw | Quaternion W component |
115
-
116
- ### Coordinate System Reference
117
-
118
- To ensure correct visualization and control, please adhere to the following frame definition:
119
-
120
- * **Origin (0,0,0):** Geometric center of the tracking base stations (World Frame).
121
- * 🔴 **X-Axis:** Points **Right** (facing the robot workspace).
122
- * 🟢 **Y-Axis:** Points **Forward** (main manipulation direction).
123
- * 🔵 **Z-Axis:** Points **Upward** (aligned with gravity, opposing g).
124
-
125
- ### Data Structure
126
-
127
- * **Timestamp:** Unix timestamp (seconds).
128
- * **Position (x, y, z):** Translation vector in meters.
129
- * **Orientation (qx, qy, qz, qw):** Rotation quaternion (Scalar-last format).
130
-
131
-
132
- <div align="center">
133
- <small><i>Tip: If using ROS or specific simulators (like Isaac Gym), ensure you apply the necessary transform if your Z-axis convention differs.</i></small>
134
- </div>
135
-
136
- ---
137
 
138
- ## 🗂️ Dataset Structure
139
 
140
  **Purpose:** The dataset is organized by tasks and sessions, with each task containing the multimodal sensor data streams captured during robotic manipulation episodes. The hierarchical structure of the actual dataset is shown below:
141
 
@@ -143,7 +97,7 @@ To ensure correct visualization and control, please adhere to the following fram
143
  📂 dataset_root/
144
  ├── 🥤 pick_coke/ # [Task] Picking up a coke can
145
  │ ├── 📷 RGB_Images/
146
- │ │ └── 🎞️ Frames/ # Multi-view RGB image frames (.jpg)
147
  │ ├── ☁️ ToF_PointClouds/
148
  │ │ └── 🧊 PointClouds/ # Depth sensor point cloud data (.pcd)
149
  │ ├── 🔧 Clamp_Data/ # Gripper width measurements (.txt)
@@ -159,48 +113,48 @@ To ensure correct visualization and control, please adhere to the following fram
159
  └── 📁 session_folders/ # Individual recording sessions
160
  ```
161
 
162
  ---
163
 
164
- ## 📝 Directory & Data Details
165
-
166
- Below is the detailed specification for each data stream contained within a task folder.
167
-
168
- 1. `RGB_Images/Frames/`
169
- * **Description:** Stores multi-view RGB image frames captured during the task.
170
- * **Format:** `.jpg` (JPEG Image)
171
- * **Shape:** `(num_frames, Height, Width, 3)`
172
- * **Data Type:** `uint8`
173
- * **Color Space:** RGB
174
-
175
- 2. `ToF_PointClouds/PointClouds/`
176
- * **Description:** Dense 3D point cloud data generated by Time-of-Flight sensors.
177
- * **Format:** `.pcd` (Point Cloud Library format)
178
- * **Structure:** Unordered list of 3D points `(x, y, z)`.
179
- * **Unit:** Meters
180
-
181
- 3. `Merged_Trajectory/`
182
- * **Description:** Fused trajectory data combining SLAM, Vive tracking, and IMU inputs. Represents the 6-DoF pose of the end-effector.
183
- * **File:** `merged_trajectory.txt`
184
- * **Shape:** `(num_timesteps, 8)`
185
- * **Columns:**
186
- *
187
- ```text
188
- [Timestamp, Pos_X, Pos_Y, Pos_Z, Q_X, Q_Y, Q_Z, Q_W]
189
- ```
190
-
191
- * `Timestamp`: Unix timestamp (seconds)
192
- * `Pos`: Cartesian position (meters)
193
- * `Q`: Rotation quaternion (scalar-last `xyzw`)
194
-
195
- 4. `Clamp_Data/`
196
- * **Description:** Records the state of the end-effector (gripper) over time.
197
- * **File:** `clamp_data_tum.txt`
198
- * **Shape:** `(num_timesteps, 2)`
199
- * **Columns:**
200
- ```text
201
- [Timestamp, Width]
202
- ```
203
- * `Width`: Opening distance of the clamp (meters/units).
204
 
205
  ---
206
 
 
72
  - Preliminary algorithm development
73
  - Demonstrating multimodal alignment and synchronization
74
75
 
76
  Full-scale datasets are available upon request for research or enterprise collaboration.
77
 
80
 
81
  ## 📊 Data Specifications
82
 
83
+
84
+
85
  | **Data Type** | **Path** | **Shape** | **Type** | **Description** |
86
  |--------------|----------|-----------|----------|-----------------|
87
+ | RGB Images | RGB_Images/Frames/*.mp4 | (H, W, 3) | uint8 | Multi-view RGB images |
88
  | ToF PointClouds | ToF_PointClouds/PointClouds/*.pcd | variable | pcd | Time-of-Flight point clouds |
89
  | Clamp Data | Clamp_Data/clamp_data_tum.txt | (N, 2) | float | Timestamp + clamp width |
90
  | Merged Trajectory | Merged_Trajectory/merged_trajectory.txt | (N, 8) | float | Fused multi-sensor pose |
91
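The `.pcd` frames listed above can be inspected without heavy dependencies. Below is a minimal sketch for the ASCII variant of the PCD format only; `load_ascii_pcd` is an illustrative helper, not part of the dataset tooling, and binary `.pcd` files need a full library such as Open3D:

```python
import numpy as np

def load_ascii_pcd(path):
    """Minimal reader for ASCII .pcd files.

    Returns an (N, num_fields) float array of the points.
    Binary PCD variants are not handled by this sketch.
    """
    with open(path) as f:
        lines = f.read().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("DATA"):
            if line.split()[1].lower() != "ascii":
                raise ValueError("only 'DATA ascii' is supported by this sketch")
            break
    else:
        raise ValueError("no DATA line found; not a valid .pcd file")
    # everything after the DATA line is one point per row
    rows = [r.split() for r in lines[i + 1:] if r.strip()]
    return np.array(rows, dtype=float)
```

The header fields (`FIELDS`, `POINTS`, etc.) are skipped here; parse them if you need per-field metadata.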
 
92
93
 
94
  **Purpose:** The dataset is organized by tasks and sessions, with each task containing the multimodal sensor data streams captured during robotic manipulation episodes. The hierarchical structure of the actual dataset is shown below:
95
 
96
  ```
97
  📂 dataset_root/
98
  ├── 🥤 pick_coke/ # [Task] Picking up a coke can
99
  │ ├── 📷 RGB_Images/
100
+ │ │ └── 🎞️ Frames/ # Multi-view RGB image frames (.mp4)
101
  │ ├── ☁️ ToF_PointClouds/
102
  │ │ └── 🧊 PointClouds/ # Depth sensor point cloud data (.pcd)
103
  │ ├── 🔧 Clamp_Data/ # Gripper width measurements (.txt)
 
113
  └── 📁 session_folders/ # Individual recording sessions
114
  ```
115
 
116
+ * Sensor Data Details:
117
+   * `RGB_Images/Frames/` (`.mp4`): multi-view RGB video footage of the manipulation task.
118
+   * `ToF_PointClouds/`: dense 3D point cloud data (`.pcd` files) from Time-of-Flight sensors.
119
+   * `Merged_Trajectory/merged_trajectory.txt`: the primary 6-DoF end-effector trajectory, fused from multiple sensor inputs.
120
+   * `Clamp_Data/clamp_data_tum.txt`: the gripper's opening width over time.
121
+ ---
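Since each stream carries its own timestamps, cross-stream synchronization can be sketched with a nearest-timestamp lookup (for example, matching clamp readings to trajectory poses). `align_nearest` is an illustrative helper under that assumption, not part of the dataset tooling:

```python
import numpy as np

def align_nearest(ref_ts, other_ts):
    """For each reference timestamp, return the index of the nearest
    timestamp in a second, sorted stream."""
    ref = np.asarray(ref_ts, dtype=float)
    other = np.asarray(other_ts, dtype=float)
    # insertion points, clamped so idx-1 and idx are both valid
    idx = np.clip(np.searchsorted(other, ref), 1, len(other) - 1)
    # step back one slot wherever the left neighbour is closer
    idx -= (ref - other[idx - 1]) < (other[idx] - ref)
    return idx
```

For tighter alignment you could instead interpolate poses between the two neighbouring timestamps rather than picking the nearest one.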
122
+
123
+
124
+ ## 🧭 Data Formats
125
+
126
+ All pose data (SLAM, Vive, fused) follow the same structure:
127
+
128
+ ```text
129
+ timestamp x y z qx qy qz qw
130
+ ```
132
+
133
+ | Field | Description | Field | Description |
134
+ | :---: | :---: | :---: | :---: |
135
+ | timestamp | Unix timestamp | qx | Quaternion X component |
136
+ | x | Position X (meters) | qy | Quaternion Y component |
137
+ | y | Position Y (meters) | qz | Quaternion Z component |
138
+ | z | Position Z (meters) | qw | Quaternion W component |
139
+
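These rows can be parsed with a short Python sketch. Assumptions to note: whitespace-separated columns exactly as tabulated above, and unit quaternions; `load_tum_poses` and `quat_to_matrix` are illustrative helper names, not part of any released tooling.

```python
import numpy as np

def load_tum_poses(path):
    """Load a TUM-style pose file (rows: timestamp x y z qx qy qz qw)
    into an (N, 8) float array."""
    return np.loadtxt(path).reshape(-1, 8)

def quat_to_matrix(qx, qy, qz, qw):
    """3x3 rotation matrix from a scalar-last (xyzw) unit quaternion."""
    x, y, z, w = qx, qy, qz, qw
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
```

The scalar-last layout also matches what e.g. `scipy.spatial.transform.Rotation.from_quat` expects, so the quaternion columns can be passed to it directly.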
140
  ---
141
 
142
+ ### Coordinate System
143
+
144
+ To ensure correct visualization and control, all pose data adheres to the following right-handed coordinate system:
145
+ * **Origin (0,0,0):** Geometric center of the tracking base stations (World Frame).
146
+ * 🔴 **X-Axis:** Points Right (relative to the workspace).
147
+ * 🟢 **Y-Axis:** Points Forward (the primary direction of manipulation).
148
+ * 🔵 **Z-Axis:** Points Upward (opposite to the direction of gravity).
149
+ <div align="center">
150
+ <img src="https://huggingface.co/datasets/FastUMIPro/example_data_fastumi_pro_raw/resolve/main/1_2025-12-11_201019_400.jpg" alt="Coordinate System Visualization" width="50%" style="border-radius: 8px; margin-top: 15px;">
151
+ <br>
152
+ <small><i>Visual reference for the coordinate system.</i></small>
153
+ </div>
154
+ <div align="center" style="margin-top: 15px;">
155
+ <small><i>Tip: When using frameworks such as ROS or simulators such as Isaac Gym, ensure your coordinate frame conventions match. You may need to apply a transformation if your framework uses a different "up" axis (e.g., Z-up vs. Y-up).</i></small>
156
+ </div>
157
+
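Following the tip above, a Z-up to Y-up change of basis can be sketched as below. The exact mapping depends on your target framework's forward-axis convention, so treat this matrix as an assumption to verify, not a definitive transform:

```python
import numpy as np

# Assumed mapping: x stays right, the Z-up "up" axis becomes the Y-up
# "up" axis, and the old forward (+y) becomes -z (OpenGL-style forward).
Z_UP_TO_Y_UP = np.array([
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 1.0],
    [0.0, -1.0, 0.0],
])

def z_up_to_y_up(point):
    """Re-express a point from this dataset's Z-up world frame in a Y-up frame."""
    return Z_UP_TO_Y_UP @ np.asarray(point, dtype=float)
```

Orientations need the same treatment: conjugate each rotation by this matrix (R' = M R Mᵀ) rather than rotating positions alone.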
158
 
159
  ---
160