---
tags:
- multisensory
- robotics
- tactile
- audio
- rgb-d
- real-world
- object-centric
- cross-modal
size_categories:
- 1K<n<10K
license: mit
pretty_name: X-Capture
---

# X-Capture: An Open-Source Portable Device for Multi-Sensory Learning (ICCV 2025)
**Authors:** Samuel Clarke, Suzannah Wistreich, Yanjie Ze, Jiajun Wu  
Stanford University

[[Paper]](https://arxiv.org/abs/2504.02318) |  [[Project Page]](https://xcapture.github.io) |  [[Dataset Download]](https://huggingface.co/datasets/swistreich/XCapture/resolve/main/XCapture_data.zip)  

![X-Capture Overview](https://huggingface.co/datasets/swistreich/XCapture/resolve/main/header.png)

The X-Capture dataset contains multisensory data collected from **600 real-world objects** in **nine in-the-wild environments**. We provide **RGB-D, acoustic, tactile,** and **3D data**. Each object has six capture points, covering diverse locations on its surface.

The dataset can be downloaded with:

```bash
wget https://huggingface.co/datasets/swistreich/XCapture/resolve/main/XCapture_data.zip -O XCapture_data.zip
```

## Dataset Description

- **Modality:** RGB, Depth, Tactile, Audio, 3D
- **Objects:** 600 real-world objects  
- **Samples:** 3,600 (6 per object)  
- **Environments:** 9 natural, real-world environments  
- **Curated by:** Samuel Clarke, Suzannah Wistreich, Yanjie Ze, Jiajun Wu  
- **License:** MIT  
- **Paper:** https://arxiv.org/abs/2504.02318  
- **Website:** https://xcapture.github.io
- **Download:** https://huggingface.co/datasets/swistreich/XCapture/resolve/main/XCapture_data.zip

---
## Usage
- Cross-sensory retrieval (audio→image, touch→3D, etc.)
- Multimodal representation learning  
- Pretraining encoders across RGB-D / tactile / audio  
- Object-centric perception  
- Reconstruction (2D/3D) from X-modal signals  
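
As a minimal sketch of the cross-sensory retrieval setting, the snippet below performs nearest-neighbor retrieval by cosine similarity between a query embedding from one modality and a gallery of embeddings from another. The random vectors here are placeholders standing in for real encoder outputs (e.g., audio and image embeddings); the function names and dimensions are illustrative assumptions, not part of the dataset's API.

```python
import numpy as np

def retrieve(query_emb: np.ndarray, gallery_embs: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k most similar gallery embeddings (cosine similarity)."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity of query against each gallery item
    return np.argsort(-sims)[:k]      # indices sorted by descending similarity

# Toy example: random vectors stand in for encoder outputs.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 128))               # e.g., image embeddings
query = gallery[42] + 0.01 * rng.normal(size=128)   # e.g., the matching audio embedding
top = retrieve(query, gallery, k=5)
print(top[0])  # the matching item (index 42) ranks first
```

In practice, the query and gallery embeddings would come from encoders trained or pretrained on the paired X-Capture modalities.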

---

## Dataset Structure

Each object directory contains six capture points, each with:
- **rgb:** 640×480 color images  
- **depth:** aligned depth images  
- **tactile:** high-resolution DIGIT tactile images under 10 N, 15 N, and 20 N presses
- **audio:** ~3s audio/video clip of impact sound  
- **3D:** local object mesh at contact point

There are no train/val/test splits; users are encouraged to construct splits suited to their task.
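
A simple way to work with the data is to build an index mapping each (object, capture point) pair to its modality files. The sketch below does this for a hypothetical directory layout (`object_XXX/point_Y/<modality>...`); the exact file and directory names inside `XCapture_data.zip` are assumptions here, so adjust the patterns to the archive's actual structure after unzipping. A mock layout is created in a temporary directory for demonstration, since the real data must be downloaded separately.

```python
import tempfile
from pathlib import Path

# NOTE: this layout is an assumed example, not the archive's documented structure.
MODALITIES = ["rgb", "depth", "tactile", "audio"]

def index_dataset(root: Path) -> dict:
    """Map each (object, point) directory to the modality files it contains."""
    index = {}
    for obj_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for point_dir in sorted(p for p in obj_dir.iterdir() if p.is_dir()):
            files = {m: sorted(point_dir.glob(f"{m}*")) for m in MODALITIES}
            index[(obj_dir.name, point_dir.name)] = files
    return index

# Build a mock layout: 2 objects x 6 capture points, one file per modality.
root = Path(tempfile.mkdtemp())
for o in range(2):
    for p in range(6):
        d = root / f"object_{o:03d}" / f"point_{p}"
        d.mkdir(parents=True)
        for m in MODALITIES:
            (d / f"{m}.dat").touch()

index = index_dataset(root)
print(len(index))  # 12 capture points indexed
```

Because the dataset ships without predefined splits, an index like this also makes it easy to construct object-level train/val/test splits (splitting by object rather than by capture point avoids leakage across splits).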

---

## Citation

**BibTeX:**
```bibtex
@misc{clarke2025xcapture,
    title={X-Capture: An Open-Source Portable Device for Multi-Sensory Learning},
    author={Samuel Clarke and Suzannah Wistreich and Yanjie Ze and Jiajun Wu},
    year={2025},
    eprint={2504.02318},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2504.02318},
}
```