swistreich committed
Commit 8c73646 · verified · 1 parent: 6a7696f

Update README.md

Files changed (1): README.md (+11 −12)
README.md CHANGED

@@ -14,7 +14,12 @@ license: mit
 pretty_name: X-Capture
 ---
 
-# Dataset Card for **X-Capture**
+# X-Capture: An Open-Source Portable Device for Multi-Sensory Learning (ICCV 2025)
+**Authors:** Samuel Clarke, Suzannah Wistreich, Yanjie Ze, Jiajun Wu
+Stanford University
+
+[[Paper]](https://arxiv.org/abs/2504.02318) | [[Project Page]](https://xcapture.github.io) | [[Dataset Download]](https://huggingface.co/datasets/swistreich/XCapture/resolve/main/XCapture_data.zip)
+
 The X-Capture dataset contains multisensory data collected from **600 real-world objects** in **nine in-the-wild environments**. We provide **RGB-D, acoustic, tactile,** and **3D data**. Each object has six recorded points, covering diverse locations on the object.
 
 ### Dataset Description
@@ -30,7 +35,7 @@ The X-Capture dataset contains multisensory data collected from **600 real-world
 - **Download:** https://huggingface.co/datasets/swistreich/XCapture/resolve/main/XCapture_data.zip
 
 ---
-## Direct Use
+## Usage
 - Cross-sensory retrieval (audio→image, touch→3D, etc.)
 - Multimodal representation learning
 - Pretraining encoders across RGB-D / tactile / audio
@@ -41,18 +46,12 @@ The X-Capture dataset contains multisensory data collected from **600 real-world
 
 ## Dataset Structure
 
-Each object directory contains six capture points:
-object_id/
-point_id/
-*_rgb.png
-*_depth.png
-*_10N_tactile.png
-scp.mp4 # impact audio
+Each object directory contains six capture points, each with:
 - **rgb:** 640×480 color images
 - **depth:** aligned depth images
-- **tactile:** high-resolution taxel grid under 10N press
-- **audio:** ~1–2s audio/video clip of impact sound
-- **3D:** object mesh or reconstruction outputs (if included in HF release)
+- **tactile:** high-resolution DIGIT tactile images under 10N, 15N, 20N presses
+- **audio:** ~3s audio/video clip of impact sound
+- **3D:** local object mesh at contact point
 
 There are no train/val/test splits; users are encouraged to construct splits suited to their task.
 
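The removed directory listing (`object_id/point_id/` containing `*_rgb.png`, `*_depth.png`, `*_10N_tactile.png`, and `scp.mp4`) suggests a simple way to enumerate capture points after unzipping the release. A minimal sketch in Python, assuming that two-level layout and those filename patterns; the function name `index_capture_points` is illustrative, not part of the release:

```python
from collections import defaultdict
from pathlib import Path


def index_capture_points(root):
    """Group X-Capture files by (object_id, point_id).

    Assumes the object_id/point_id/ layout shown in the dataset card;
    the exact filename suffixes are illustrative and may differ in the
    actual zip.
    """
    index = defaultdict(dict)
    for f in Path(root).glob("*/*/*"):
        if not f.is_file():
            continue
        obj_id, point_id = f.parent.parent.name, f.parent.name
        entry = index[(obj_id, point_id)]
        if f.name.endswith("_rgb.png"):
            entry["rgb"] = f
        elif f.name.endswith("_depth.png"):
            entry["depth"] = f
        elif "tactile" in f.name:
            # Possibly several press forces (10N, 15N, 20N) per point.
            entry.setdefault("tactile", []).append(f)
        elif f.suffix == ".mp4":
            entry["audio"] = f
    return dict(index)
```

A dictionary keyed by `(object_id, point_id)` makes it easy to build the task-specific splits the card recommends, e.g. by splitting on `object_id` so no object leaks across train and test.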
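The cross-sensory retrieval use case listed under Usage (audio→image, touch→3D, etc.) reduces, at evaluation time, to nearest-neighbor search between per-modality embeddings. A toy sketch of that search step, assuming embeddings have already been produced by some pair of encoders; the vectors and gallery here are made up for illustration:

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def retrieve(query_emb, gallery):
    """Return the gallery id whose embedding is most similar to the query.

    gallery maps an item id (e.g. an (object_id, point_id) pair) to its
    embedding in the shared space; the shared space itself would come
    from whatever multimodal encoders the user trains on X-Capture.
    """
    return max(gallery, key=lambda k: cosine(query_emb, gallery[k]))
```

In practice both sides of the retrieval (say, an audio clip's embedding as the query and image embeddings as the gallery) live in one learned space, and one would report metrics such as recall@k over the whole gallery rather than just the top hit.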