---
tags:
- multisensory
- robotics
- tactile
- audio
- rgb-d
- real-world
- object-centric
- cross-modal
size_categories:
- 10K<n<100K
license: mit
pretty_name: X-Capture
---

# Dataset Card for **X-Capture**

The X-Capture dataset contains multisensory data collected from **600 real-world objects** in **nine in-the-wild environments**. We provide **RGB-D, acoustic, tactile,** and **3D data**. Each object has six recorded capture points, covering diverse locations on the object.

### Dataset Description

- **Modality:** RGB, Depth, Tactile, Audio, 3D
- **Objects:** 600 real-world objects
- **Samples:** 3,600 (6 per object)
- **Environments:** 9 natural, real-world environments
- **Curated by:** Samuel Clarke, Suzannah Wistreich, Yanjie Ze, Jiajun Wu
- **License:** MIT
- **Paper:** https://arxiv.org/abs/2504.02318
- **Website:** https://x-capture.stanford.edu
- **Download:** (HF download link once uploaded)

---

## Direct Use

- Cross-sensory retrieval (audio→image, touch→3D, etc.)
- Multimodal representation learning
- Pretraining encoders across RGB-D / tactile / audio
- Object-centric perception
- 2D/3D reconstruction from cross-modal signals
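One of these uses, cross-sensory retrieval, can be sketched as a toy nearest-neighbor search over a shared embedding space. This is only an illustration: the embeddings and sample ids below are placeholders, not dataset contents, and the function names are ours.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length, nonzero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, gallery):
    """Rank gallery ids by cosine similarity to the query embedding.

    gallery: dict mapping sample id -> embedding vector
    (e.g. image embeddings queried with an audio embedding).
    """
    return sorted(
        gallery,
        key=lambda k: cosine_similarity(query_embedding, gallery[k]),
        reverse=True,
    )
```

With aligned encoders, the audio embedding of an object should rank that object's image embedding first; the quality of such rankings is what cross-sensory retrieval benchmarks measure.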
---

## Dataset Structure

Each object directory contains six capture points:

    object_id/
      point_id/
        *_rgb.png
        *_depth.png
        *_10N_tactile.png
        scp.mp4   # impact audio

- **rgb:** 640×480 color images
- **depth:** aligned depth images
- **tactile:** high-resolution taxel grid under a 10 N press
- **audio:** ~1–2 s audio/video clip of the impact sound
- **3D:** object mesh or reconstruction outputs (if included in the HF release)

There are no predefined train/val/test splits; users are encouraged to construct splits suited to their task.
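A minimal Python sketch of indexing this layout and building an object-level split (the function names are ours and the filename patterns are assumed from the tree above; this is not an official loader):

```python
import os
import random
from collections import defaultdict

def index_dataset(root):
    """Group files by (object_id, point_id), keyed by modality.

    Assumes the root/object_id/point_id/ layout described above.
    """
    samples = defaultdict(dict)
    for object_id in sorted(os.listdir(root)):
        obj_dir = os.path.join(root, object_id)
        if not os.path.isdir(obj_dir):
            continue
        for point_id in sorted(os.listdir(obj_dir)):
            point_dir = os.path.join(obj_dir, point_id)
            if not os.path.isdir(point_dir):
                continue
            for fname in os.listdir(point_dir):
                path = os.path.join(point_dir, fname)
                if fname.endswith("_rgb.png"):
                    samples[(object_id, point_id)]["rgb"] = path
                elif fname.endswith("_depth.png"):
                    samples[(object_id, point_id)]["depth"] = path
                elif fname.endswith("_tactile.png"):
                    samples[(object_id, point_id)]["tactile"] = path
                elif fname.endswith(".mp4"):
                    samples[(object_id, point_id)]["audio"] = path
    return dict(samples)

def object_level_split(samples, val_fraction=0.1, seed=0):
    """Split by object_id so all six points of an object share a split."""
    object_ids = sorted({obj for obj, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(object_ids)
    n_val = max(1, int(len(object_ids) * val_fraction))
    val_ids = set(object_ids[:n_val])
    train = {k: v for k, v in samples.items() if k[0] not in val_ids}
    val = {k: v for k, v in samples.items() if k[0] in val_ids}
    return train, val
```

Splitting at the object level (rather than the point level) keeps all six capture points of an object in the same split, which avoids leaking near-duplicate views of one object across train and validation.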

---

## Citation

**BibTeX:**
```bibtex
@misc{clarke2025xcapture,
      title={X-Capture: An Open-Source Portable Device for Multi-Sensory Learning},
      author={Samuel Clarke and Suzannah Wistreich and Yanjie Ze and Jiajun Wu},
      year={2025},
      eprint={2504.02318},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.02318},
}
```