---
license: cc-by-4.0
---

# FusionX Multimodal Sample Dataset

This repository contains a sample multimodal dataset including:

- RGB camera frames (30 Hz, JPEG)
- Depth images (16-bit PNG, values in millimeters)
- Tactile glove data (aligned per frame, Parquet)
- Per-recording metadata

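Depth frames store distance as 16-bit unsigned integers in millimeters. Once a frame is decoded into an array (with any image library), converting to meters is a single scale. A minimal NumPy sketch, using an illustrative stand-in array rather than a real decoded frame:

```python
import numpy as np

# Stand-in for a decoded 16-bit depth frame; real frames come from
# decoding depth/*.png with an image library of your choice.
depth_mm = np.array([[0, 500], [1250, 65535]], dtype=np.uint16)

# Scale millimeters to meters as float32.
depth_m = depth_mm.astype(np.float32) / 1000.0
```

A pixel value of 0 commonly indicates a missing reading in depth imagery, though that convention is an assumption to verify against this dataset's metadata.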
## Structure

To avoid API rate limits and improve upload stability, each recording is stored as a single ZIP archive.

Each ZIP file contains one recording directory structured as follows:

```
recording_YYYYMMDD_HHMMSS/
├── frames.parquet        # Per-frame aligned metadata + tactile arrays
├── rgb/                  # 30 Hz RGB camera frames (JPEG)
│   ├── 000000.jpg
│   ├── 000001.jpg
│   └── ...
└── depth/                # 16-bit depth images in millimeters (PNG)
    ├── 000000.png
    ├── 000001.png
    └── ...
```

## How to Use

1. Download the repository (or use `huggingface_hub.snapshot_download`).
2. Unzip all archives inside `zips/`.
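The two steps above can be sketched as follows. Step 1 would use `snapshot_download(repo_id=..., repo_type="dataset")` from `huggingface_hub` (the repo id depends on where this dataset is hosted, so it is omitted here); step 2 is plain standard-library extraction. The demo archive at the end is synthetic, standing in for a downloaded recording ZIP:

```python
import tempfile
import zipfile
from pathlib import Path

def extract_all(zips_dir: Path, out_dir: Path) -> list[Path]:
    """Extract every ZIP archive found in zips_dir into out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    archives = sorted(zips_dir.glob("*.zip"))
    for archive in archives:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(out_dir)
    return archives

# Demo on a synthetic archive (illustrative name, not a real recording).
tmp = Path(tempfile.mkdtemp())
zips = tmp / "zips"
zips.mkdir()
with zipfile.ZipFile(zips / "recording_sample.zip", "w") as zf:
    zf.writestr("recording_sample/frames.parquet", b"")
extracted = extract_all(zips, tmp / "dataset")
```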

## After Extraction

After unzipping all archives inside the `zips/` directory, you will obtain the original dataset structure:

```
dataset/
├── dataset_info.json
└── recording_YYYYMMDD_HHMMSS/
    ├── frames.parquet
    ├── rgb/
    │   ├── 000000.jpg
    │   └── ...
    └── depth/
        ├── 000000.png
        └── ...
```

Each `recording_*` directory represents a single synchronized capture session containing:

- Vision data (RGB + depth)
- Tactile glove data
- Frame-level alignment metadata

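Since RGB and depth frames of a recording share the same zero-padded index, pairing them needs only the standard library; `frames.parquet` itself can then be read with e.g. `pandas.read_parquet` (requires a Parquet engine such as pyarrow). A sketch of the pairing, with a synthetic directory mimicking the layout above (the recording name is illustrative):

```python
import tempfile
from pathlib import Path

def paired_frames(recording_dir: Path) -> list[tuple[Path, Path]]:
    """Pair RGB frames with the depth frames that share their index."""
    pairs = []
    for rgb in sorted((recording_dir / "rgb").glob("*.jpg")):
        depth = recording_dir / "depth" / f"{rgb.stem}.png"
        if depth.exists():
            pairs.append((rgb, depth))
    return pairs

# Synthetic recording directory mimicking the extracted layout.
rec = Path(tempfile.mkdtemp()) / "recording_19700101_000000"
(rec / "rgb").mkdir(parents=True)
(rec / "depth").mkdir(parents=True)
for i in range(3):
    (rec / "rgb" / f"{i:06d}.jpg").touch()
    if i < 2:  # leave one frame without a depth counterpart
        (rec / "depth" / f"{i:06d}.png").touch()
pairs = paired_frames(rec)
```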
## License

Creative Commons Attribution 4.0 International (CC BY 4.0)

You are free to:

- Share (copy and redistribute the material)
- Adapt (remix, transform, and build upon it)
- Use it for any purpose, including commercially

as long as appropriate attribution is provided.