---
license: cc-by-4.0
tags:
- multimodal
- tactile-sensing
- vision
- TouchTronix
- Robotics
- FusionX
pretty_name: FusionX Multimodal Sample Data V2
---

# FusionX Multimodal Sample Dataset

This repository contains a multimodal dataset of synchronized vision and tactile glove data captured across distinct manipulation tasks (e.g., box cutting, drilling, water pouring).

## Dataset Overview

* **RGB Video:** 30 Hz 720p color stream (`.mp4`).
* **Depth Data:** Lossless 16-bit depth stream (`.mkv`) and raw 16-bit PNG frames.
* **Mono Vision:** Left and right monochrome camera views (raw PNG).
* **Tactile Data:** Per-frame aligned glove sensor data stored in high-performance `.parquet` format.
* **Calibration:** Intrinsic and extrinsic parameters provided in `.json`.

### Coordinate Systems & Alignment
To facilitate 3D reconstruction and multimodal learning, the data alignment is defined as follows:

* **RGB & Depth:** The Depth stream is **software-aligned** to the RGB camera coordinate frame. A pixel at (u, v) in the depth map corresponds directly to the pixel (u, v) in the RGB frame.
* **Monochrome Stereo:** The Left and Right monochrome cameras operate in their **own independent coordinate frames**. 
* **Resolution Note:** While RGB/Depth are 720p, the Monochrome streams are captured at 640x400 resolution.
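
Because the depth stream is aligned to the RGB frame, each depth pixel can be back-projected into the RGB camera coordinate frame using the intrinsics from `calib.json`. The sketch below assumes standard pinhole parameters `fx`, `fy`, `cx`, `cy` and a millimeter depth scale; check `calib.json` and `video_meta.json` for the actual field names and units:

```python
import numpy as np

def backproject(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) into a 3D point (meters)
    in the RGB camera frame. Assumes depth stored in millimeters."""
    z = depth_mm / 1000.0            # assumed millimeter depth scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with illustrative intrinsics for a 720p stream
point = backproject(u=640, v=360, depth_mm=1500,
                    fx=900.0, fy=900.0, cx=640.0, cy=360.0)
```

The intrinsic values above are placeholders for illustration, not the dataset's calibration.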


---

## Repository Structure

The dataset is organized by task. To optimize performance, raw image frames are compressed into modality-specific ZIP archives.

```text
dataset/
├── dataset_info.json            # Global dataset metadata
├── [task_name]/                 # e.g., box_cut1, drill1, waterpour3
│   ├── depth.mkv                # Post-processed depth video
│   ├── rgb.mp4                  # Post-processed RGB video
│   ├── preview_all.mp4          # Synced visualization (Grid view)
│   ├── preview_glove.mp4        # Tactile data visualization
│   ├── frames.parquet           # Main data index (Timestamps + Tactile)
│   ├── calib.json               # Calibration matrices
│   ├── video_meta.json          # Post-processed video specifications
│   ├── depth.zip                # Raw 16-bit PNG frames
│   ├── mono_left.zip            # Raw mono-left PNG frames
│   ├── mono_right.zip           # Raw mono-right PNG frames
│   └── rgb.zip                  # Raw RGB JPG frames
```

## Tactile Data Schema (`frames.parquet`)

The `frames.parquet` file is the central index for the dataset, providing per-frame synchronization between vision and tactile sensors.

### 1. Vision & Synchronization
| Column | Description |
| :--- | :--- |
| `frame_idx` | Integer index of the frame. |
| `timestamp` | Global synchronized UNIX timestamp. |
| `rgb_path`, `depth_path` | Relative paths to the original RGB and Depth images. |
| `mono_left_path`, `mono_right_path` | Relative paths to the monochrome stereo images. |

### 2. Camera IMU (OAK-D)
| Column | Description |
| :--- | :--- |
| `oak_imu_quaternion` | 4-element orientation vector (W, X, Y, Z). |
| `oak_imu_gyro` | Angular velocity (rad/s) for X, Y, Z axes. |
| `oak_imu_accel` | Linear acceleration (m/s²) for X, Y, Z axes. |
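
The quaternion columns are stored W-first. A standard unit-quaternion-to-rotation-matrix conversion, assuming nothing beyond the (W, X, Y, Z) ordering stated above:

```python
import numpy as np

def quat_wxyz_to_rotmat(q):
    """Convert a unit quaternion (W, X, Y, Z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)   # normalize defensively
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Identity orientation maps to the identity matrix
R = quat_wxyz_to_rotmat(np.array([1.0, 0.0, 0.0, 0.0]))
```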

### 3. Tactile Glove Data (Left Hand: `lh_`, Right Hand: `rh_`)
Each hand contains independent timestamping and comprehensive pressure/bend sensing:

| Feature | Columns |
| :--- | :--- |
| **Glove Timing** | `_glove_timestamp`, `_glove_dt_ms` |
| **Raw Pressure** | `_thumb_pressure`, `_index_pressure`, `_middle_pressure`, `_ring_pressure`, `_little_pressure`, `_palm_pressure` |
| **Calibrated Pressure** | `_thumb_pressure_cal`, `_index_pressure_cal`, `_middle_pressure_cal`, `_ring_pressure_cal`, `_little_pressure_cal`, `_palm_pressure_cal` |
| **Finger Kinematics** | `_finger_bend` (Raw), `_finger_bend_cal` (Calibrated) |
| **Glove IMU** | `_imu_quaternion`, `_imu_gyro`, `_imu_accel` |
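
The schema above can be accessed with pandas. The snippet below is a sketch: it builds the per-hand column names and, for illustration, uses a tiny synthetic stand-in with the same layout. Real data would come from `pd.read_parquet("<task>/frames.parquet")`; the `rgb_path` values shown are invented:

```python
import pandas as pd

def pressure_cols(hand: str) -> list[str]:
    """Calibrated-pressure column names for one hand ('lh' or 'rh')."""
    sensors = ("thumb", "index", "middle", "ring", "little", "palm")
    return [f"{hand}_{s}_pressure_cal" for s in sensors]

# In practice: df = pd.read_parquet("box_cut1/frames.parquet")
# Tiny synthetic stand-in with the same column layout:
df = pd.DataFrame({
    "frame_idx": [0, 1],
    "timestamp": [1700000000.000, 1700000000.033],
    "rgb_path": ["rgb/000000.jpg", "rgb/000001.jpg"],
    **{c: [0.0, 0.1] for c in pressure_cols("rh")},
})

# Calibrated right-hand pressures aligned to vision frame 0
row0 = df.loc[0, ["timestamp", "rgb_path"] + pressure_cols("rh")]
```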

---

## How to Use

### 1. Download the Dataset
You can clone this repository using Git LFS or use the `huggingface_hub` library:

```python
from huggingface_hub import snapshot_download

# Download the specific task you need
snapshot_download(
    repo_id="touchtronix/FusionX-Multimodal-Sample-Data-V2",
    repo_type="dataset",
    allow_patterns="box_cut1/*" 
)
```

### 2. Accessing Raw Frames
If you require the original image files instead of the video streams, extract the modality zips within the task folder:

```bash
unzip box_cut1/rgb.zip -d box_cut1/rgb/
```
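
If you prefer to stay in Python, the standard-library `zipfile` module does the same (the existence check simply keeps this sketch runnable outside the dataset folder):

```python
import zipfile
from pathlib import Path

archive = Path("box_cut1/rgb.zip")   # example task from this dataset
if archive.exists():
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.with_suffix(""))   # -> box_cut1/rgb/
```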

---

## License

This dataset is released under the **Creative Commons Attribution 4.0 (CC BY 4.0)** license. You are free to:
* **Use:** Incorporate the data into your own projects.
* **Share:** Copy and redistribute the material in any medium or format.
* **Modify:** Remix, transform, and build upon the material.
* **Commercial Use:** Use the data for commercial purposes.

*All of the above apply as long as appropriate attribution is provided to **TouchTronix Robotics**.*