|
|
--- |
|
|
tags: |
|
|
- multisensory |
|
|
- robotics |
|
|
- tactile |
|
|
- audio |
|
|
- rgb-d |
|
|
- real-world |
|
|
- object-centric |
|
|
- cross-modal |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
license: mit |
|
|
pretty_name: X-Capture |
|
|
--- |
|
|
|
|
|
# X-Capture: An Open-Source Portable Device for Multi-Sensory Learning (ICCV 2025) |
|
|
**Authors:** Samuel Clarke, Suzannah Wistreich, Yanjie Ze, Jiajun Wu |
|
|
Stanford University |
|
|
|
|
|
[[Paper]](https://arxiv.org/abs/2504.02318) | [[Project Page]](https://xcapture.github.io) | [[Dataset Download]](https://huggingface.co/datasets/swistreich/XCapture/resolve/main/XCapture_data.zip) |
|
|
|
|
|
 |
|
|
|
|
|
The X-Capture dataset contains multisensory data collected from **600 real-world objects** in **nine in-the-wild environments**. We provide **RGB-D, acoustic, tactile,** and **3D data**. Each object is recorded at six points, covering diverse locations on its surface. |
|
|
|
|
|
The dataset can be downloaded with: |
|
|
|
|
|
```shell |
|
|
wget https://huggingface.co/datasets/swistreich/XCapture/resolve/main/XCapture_data.zip -O XCapture_data.zip |
|
|
``` |
|
|
### Dataset Description |
|
|
|
|
|
- **Modality:** RGB, Depth, Tactile, Audio, 3D |
|
|
- **Objects:** 600 real-world objects |
|
|
- **Samples:** 3,600 (6 per object) |
|
|
- **Environments:** 9 natural, real-world environments |
|
|
- **Curated by:** Samuel Clarke, Suzannah Wistreich, Yanjie Ze, Jiajun Wu |
|
|
- **License:** MIT |
|
|
- **Paper:** https://arxiv.org/abs/2504.02318 |
|
|
- **Website:** https://xcapture.github.io |
|
|
- **Download:** https://huggingface.co/datasets/swistreich/XCapture/resolve/main/XCapture_data.zip |
|
|
|
|
|
--- |
|
|
## Usage |
|
|
- Cross-sensory retrieval (audio→image, touch→3D, etc.) |
|
|
- Multimodal representation learning |
|
|
- Pretraining encoders across RGB-D / tactile / audio |
|
|
- Object-centric perception |
|
|
- Reconstruction (2D/3D) from cross-modal signals |
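As a concrete illustration of the cross-sensory retrieval task above, the sketch below performs nearest-neighbor retrieval by cosine similarity between embeddings from two modalities (e.g. audio → image). It is a minimal, hypothetical example: the embedding dimensions, the `retrieve` helper, and the random embeddings are all assumptions, not part of the released dataset or paper code.

```python
import numpy as np

def retrieve(query_emb: np.ndarray, gallery: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k gallery embeddings most similar to the query
    (cosine similarity), as in cross-sensory retrieval such as audio -> image."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to each gallery item
    return np.argsort(-sims)[:k]      # indices sorted by descending similarity

# Toy example: 10 random "image" embeddings; the query stands in for an
# audio embedding aligned to object 3 (a slightly perturbed copy of it).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 128))
query = gallery[3] + 0.01 * rng.normal(size=128)
print(retrieve(query, gallery, k=1))  # -> [3]
```

In practice the two embedding spaces would be aligned with a contrastive objective trained on paired X-Capture samples; the retrieval step itself stays the same.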
|
|
|
|
|
--- |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
Each object directory contains six capture points, each with: |
|
|
- **rgb:** 640×480 color images |
|
|
- **depth:** aligned depth images |
|
|
- **tactile:** high-resolution DIGIT tactile images captured under 10 N, 15 N, and 20 N presses |
|
|
- **audio:** ~3 s audio/video clip of the impact sound |
|
|
- **3D:** local object mesh at contact point |
|
|
|
|
|
There are no train/val/test splits; users are encouraged to construct splits suited to their task. |
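A small sketch of how one might walk the per-object layout described above. The exact directory and file names inside the released zip are assumptions here (a `<object>/<point>/<modality>` layout is assumed), so adjust the paths after inspecting the extracted data.

```python
from pathlib import Path

# Modality subdirectories assumed per capture point (names are illustrative).
MODALITIES = ("rgb", "depth", "tactile", "audio", "3D")

def list_captures(object_dir: Path) -> list[dict]:
    """Collect per-point modality paths for one object directory.

    Assumes a layout like <object>/<point>/<modality>/...; the actual
    naming in the released zip may differ.
    """
    captures = []
    for point_dir in sorted(p for p in object_dir.iterdir() if p.is_dir()):
        captures.append({m: point_dir / m for m in MODALITIES})
    return captures
```

For example, `list_captures(Path("XCapture_data/object_001"))` would return six dicts (one per capture point), each mapping a modality name to its path, which makes it easy to build custom train/val/test splits.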
|
|
|
|
|
--- |
|
|
|
|
|
## Citation |
|
|
|
|
|
**BibTeX:** |
|
|
```bibtex |
|
|
@misc{clarke2025xcapture, |
|
|
title={X-Capture: An Open-Source Portable Device for Multi-Sensory Learning}, |
|
|
author={Samuel Clarke and Suzannah Wistreich and Yanjie Ze and Jiajun Wu}, |
|
|
year={2025}, |
|
|
eprint={2504.02318}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CV}, |
|
|
url={https://arxiv.org/abs/2504.02318}, |
|
|
} |
``` |