---
license: cc-by-4.0
task_categories:
- robotics
tags:
- robotics
- tactile
- manipulation
- multimodal
- gelsight
- realsense
- motion-capture
- dynamics
- world-model
- human-collected
pretty_name: React (Tactile-Visual Manipulation)
size_categories:
- 100K<n<1M
configs:
- config_name: episode_metadata
  data_files:
  - split: train
    path: metadata/episodes.parquet
- config_name: motherboard
  data_files:
  - split: train
    path: processed/mode1_v1/motherboard/**/episode_*.pt
- config_name: all
  data_files:
  - split: train
    path: processed/mode1_v1/**/episode_*.pt
---

# React

Dense, contact-rich, synchronized multimodal interaction data collected from **human hands holding handheld GelSight tactile sensors — no robot arm involved**. Intended for **tactile-visual dynamics / world-model learning**, *not* a policy / demonstration dataset.

> **106 min of robot-free human-hand multimodal interaction · 190,231 frames @ 30 Hz across 3 × RGB-D + 2 × GelSight + 3-body OptiTrack**

## What's different about this dataset

| | |
|---|---|
| **Robot-arm-free** | Recorded directly from a human operator holding two GelSight Mini sensors. No robot kinematics, no embodiment bias, no robot occluding the scene. |
| **Tactile + RGB-D + mocap, simultaneous** | Most manipulation datasets ship one of these. React ships all three, synchronized to a common 30 Hz clock. |
| **Contact-dense** | **64 % of post-trim frames** have confirmed tactile contact on at least one sensor — see [`figures/contact_intensity_full.png`](figures/contact_intensity_full.png). |
| **Long, continuous interaction** | Recordings are minutes long, not seconds. Median recording duration is 4 min; longest 19 min. Good for short-window sampling of dynamics, not for action-conditioned policy learning. |

## At a glance

| | |
|---|---|
| Embodiment | **Human hands (no robot)** — handheld GelSight sensors with motion-capture rigid bodies |
| Intended use | Dynamics / world-model learning over short multimodal windows. Sample short trajectories (1 s – 10 s); recording-file boundaries are not action boundaries. |
| Total synchronized duration | **105.7 min** at 30 Hz (190,231 multimodal frames, post-trim) |
| Bimanual tactile-contact time | **64.3 % of post-trim frames** (3,302 contact events, median 0.73 s; see [`figures/dataset_figures/F2_contact_event_duration_histogram.png`](figures/dataset_figures/F2_contact_event_duration_histogram.png) and [`metadata/episodes.parquet`](metadata/episodes.parquet) for per-file numbers) |
| Cameras | 3× Intel RealSense D415 (color + depth), 480×640, 30 FPS |
| Tactile | 2× GelSight Mini (left, right), handheld |
| Motion capture | OptiTrack VRPN, 3 rigid bodies, ~120 Hz |
| Tasks | `motherboard` (more coming) |
| License | CC-BY-4.0 |

## Recording sessions

| Date | Kind | Active sensors | Notes |
|---|---|---|---|
| 2026-05-10 | session | left + right | First full bimanual session. |
| 2026-05-11 | session | left + right | Largest session. A handful of GelSight LED-flicker frames + one mocap teleport; see [`bad_frames.json`](bad_frames.json). |

See [`tasks.json`](tasks.json) for the machine-readable registry (per-date `active_sensors`, etc.).

**OT-uninitialized prefixes trimmed.** Three episodes had OptiTrack offline at the start of recording (1–11 min each); those prefixes have been cut from the published `.pt` files, and the trim offset is recorded per file in `_contact_meta.trim_offset`. Future recordings use an OT watchdog that refuses to start an episode unless mocap is streaming. Full story: [`docs/caveats.md`](docs/caveats.md).
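
A minimal sketch of reading that offset back (this assumes `_contact_meta` is exposed as a plain dict entry inside each published `.pt`; the file below is one real episode, used purely as an example):

```python
import torch
from huggingface_hub import hf_hub_download

# Fetch one published episode file from the Hub.
path = hf_hub_download(
    repo_id="yxma/React", repo_type="dataset",
    filename="processed/mode1_v1/motherboard/2026-05-11/episode_003.pt",
)
ep = torch.load(path, weights_only=False)

# Assumed layout: _contact_meta is a small per-file dict whose trim_offset
# entry records how far into the raw recording the published data starts.
print("trim_offset:", ep["_contact_meta"]["trim_offset"])
```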

## Data quality

| Mode | Frames | % | Files | Cause |
|---|---:|---:|---:|---|
| GelSight LED flicker | 56 | 0.029 % | 5 | Single-frame LED dropout, recovers on the next frame |
| OptiTrack pose teleport | 56 | 0.029 % | 3 | Solver flip (translation > 5 m/s or angular > 15 rad/s) |
| OptiTrack track loss | 1,680 | 0.883 % | 6 | Marker briefly left the mocap volume / camera FOV mid-episode |
| **Total (union)** | **1,768** | **0.929 %** | **11** | |

Every flagged interval is listed in [`bad_frames.json`](bad_frames.json), keyed by `episode/episode_*`, with frame indices in TRIMMED-pt coordinates. A richer per-event view (with cross-modal motion, OT-gap, and angular-velocity stats) lives in [`freeze_intervals.json`](freeze_intervals.json). Skip-list usage is shown below and in [`docs/quality.md`](docs/quality.md). Long start-of-episode OT-uninitialized prefixes (the dominant problem in the raw recordings) have already been trimmed from the published `.pt` files — see [`docs/caveats.md`](docs/caveats.md).

## Quick start

```python
# Load by task with `datasets`
from datasets import load_dataset

ds = load_dataset("yxma/React", "motherboard", split="train")
```

Or grab a single recording file directly:

```python
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="yxma/React", repo_type="dataset",
    filename="processed/mode1_v1/motherboard/2026-05-11/episode_003.pt",
)
ep = torch.load(path, weights_only=False)
# ep["view"]                                       (T, 3, 128, 128) uint8 — overhead cam
# ep["tactile_left"], ep["tactile_right"]          (T, 3, 128, 128) uint8
# ep["sensor_left_pose"], ep["sensor_right_pose"]  (T, 7) float32 — xyz + quaternion
# ep["timestamps"]                                 (T,) float64
# Plus per-frame contact metrics: tactile_{side}_{intensity, area, mixed}
```
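
The per-frame contact metrics make it easy to locate contact-rich stretches without inspecting the tactile images. A quick sketch (assuming each metric is a `(T,)` float tensor, and reusing the 0.4 threshold from the example dataloader below):

```python
import torch

# Frames where either sensor reports contact on the "mixed" metric.
contact = (ep["tactile_left_mixed"] > 0.4) | (ep["tactile_right_mixed"] > 0.4)
print(f"contact frames: {contact.float().mean().item():.1%}")

idx = torch.nonzero(contact).flatten()
if len(idx):
    print("first contact at frame", int(idx[0]))
```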

Sampling short windows for dynamics learning — **drop windows that overlap any flagged interval**:

```python
import json

with open("bad_frames.json") as f:
    bad = json.load(f)["episodes"]  # frame indices are TRIMMED-pt coordinates

def is_clean_window(episode_key, t_start, t_end):
    """True iff [t_start, t_end] doesn't intersect any flagged span."""
    bf = bad[episode_key]
    intervals = (bf["intensity_spikes"]
                 + bf["pose_teleports_L"] + bf["pose_teleports_R"]
                 + bf["ot_loss_L"] + bf["ot_loss_R"])
    return all(not (s <= t_end and e >= t_start) for s, e in intervals)
```

Currently 1,768 / 190,231 frames (0.93 %) are flagged across 11 of 27 files — see [`docs/quality.md`](docs/quality.md) for the per-mode breakdown and more filtering recipes. The example dataloader below does this filtering for you when `skip_bad_frames=True`.
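
Continuing from the two snippets above, one way to enumerate clean window starts for a single loaded file (the `episode_key` format shown here is an assumption; check the actual keys in `bad_frames.json`):

```python
T = len(ep["timestamps"])               # ep loaded as in the Quick start
window_length = 16
episode_key = "2026-05-11/episode_003"  # assumed key format

clean_starts = [
    t for t in range(0, T - window_length + 1, window_length)
    if episode_key not in bad
    or is_clean_window(episode_key, t, t + window_length - 1)
]
print(len(clean_starts), "clean, non-overlapping windows")
```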

## Example dataloader — short contact-rich windows

A reference PyTorch `Dataset` is shipped under [`examples/react_window_dataset.py`](examples/react_window_dataset.py). It scans the processed `.pt` files, applies the contact filter, drops windows that overlap [`bad_frames.json`](bad_frames.json), and respects the per-date `active_sensors` field from [`tasks.json`](tasks.json).

```python
from examples.react_window_dataset import ReactWindowDataset
from torch.utils.data import DataLoader

ds = ReactWindowDataset(
    data_root="processed/mode1_v1/motherboard",
    bad_frames_path="bad_frames.json",
    tasks_json_path="tasks.json",
    window_length=16,           # frames per window
    stride=1,                   # within-window stride (1 = consecutive)
    window_step=16,             # step between window starts (overlap control)
    contact_metric="mixed",     # "intensity" | "area" | "mixed"
    tactile_threshold=0.4,
    min_contact_fraction=0.6,   # ≥ 60 % of window frames must have contact
    which_sensors="any",        # "any" | "both" | "left" | "right"
    skip_bad_frames=True,
    respect_active_sensors=True,
)
print(len(ds), "windows")
loader = DataLoader(ds, batch_size=8, shuffle=True, num_workers=2)
```

With the defaults shown above, the dataset assembles **~9.2 k contact-rich 16-frame windows** across the 27 recordings. Each sample is a dict of `(T, …)` tensors plus metadata (`episode`, `frame_start`, `active_sensors`, …).
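
A quick sanity check of what the loader yields (a sketch; it makes no assumption about specific key names, since the exact sample dict is defined by `ReactWindowDataset`):

```python
batch = next(iter(loader))
for key, value in batch.items():
    # Print the batched shape for tensor fields; for anything else
    # (e.g. collated metadata lists), just show the Python type.
    desc = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(f"{key:>20}: {desc}")
```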

### Example output

Four random windows, time runs left→right; each cell is `view | tactile_left | tactile_right` with sensor frame axes (X red, Y green, Z blue-ish) projected onto the view:

*(preview figure: sampled windows)*

One window played frame-by-frame with the sensor-frame overlay:

*(preview figure: single-window animation)*

Full demo script: [`examples/demo_react_window.py`](examples/demo_react_window.py).

## Recording-file previews

Per-file previews live under [`figures/episode_previews/`](figures/episode_previews) as both `.gif` and `.mp4` (MP4s render inline on HF and are ~30× smaller). Each shows 60 frames evenly sampled across the episode in the recording-viewer layout: 3 RealSense cameras with projected GelSight axes, GelSight raw + diff thumbnails, and an OptiTrack pose text panel. (The on-disk recording unit is called an "episode" purely for file naming — these boundaries don't carry semantic / action meaning for this dataset.)

## Repository layout

```
README.md                                       # this file
tasks.json                                      # task / session registry
bad_frames.json                                 # data-quality skip-list
processed/mode1_v1/<task>/<date>/episode_*.pt   # per-file tensors
figures/                                        # previews + analysis figures
docs/                                           # extended documentation
```

## More documentation

| File | Contents |
|---|---|
| [`docs/recording.md`](docs/recording.md) | Hardware setup, camera serials, sensor + mocap layout, robot-free collection method |
| [`docs/schema.md`](docs/schema.md) | Full `.pt` field reference and contact-metric definitions |
| [`docs/quality.md`](docs/quality.md) | Data-quality breakdown (per-mode), `bad_frames.json` schema, dataloader recipe, inspection figures |
| [`docs/figures.md`](docs/figures.md) | Dataset statistics + analysis gallery (F1–F8) |
| [`docs/caveats.md`](docs/caveats.md) | Known caveats and roadmap |

## License

Released under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) (CC-BY-4.0).

## Citation

If you use this dataset, please cite (TODO: add bibtex).
|