---
license: cc-by-4.0
task_categories:
  - robotics
tags:
  - robotics
  - tactile
  - manipulation
  - multimodal
  - gelsight
  - realsense
  - motion-capture
pretty_name: React (Tactile-Visual Manipulation)
size_categories:
  - 10K<n<100K
configs:
  - config_name: motherboard
    data_files:
      - split: train
        path: processed/mode1_v1/motherboard/**/episode_*.pt
  - config_name: all
    data_files:
      - split: train
        path: processed/mode1_v1/**/episode_*.pt
---

# React

Multimodal manipulation recordings from a bimanual setup with vision-based tactile sensors and motion capture.

![Tactile intensity timeline](figures/contact_intensity_full.png)

30 episodes · 138.4 min total · 87.9 min (66%) of confirmed bimanual tactile contact · 3× RGB-D cameras + 2× GelSight + 3-body OptiTrack.

## At a glance

| | |
|---|---|
| Tasks | `motherboard` (more coming) |
| Episodes | 30 |
| Total duration | 138.4 min (median 4 min/episode, longest 19 min) |
| Tactile contact | 87.9 min / 66% of frames (4,136 contact events, median 0.73 s each) |
| Cameras | 3× Intel RealSense D415 (color + depth, 480×640, 30 FPS) |
| Tactile | 2× GelSight Mini (left / right, 480×640, ~25 FPS) |
| Motion capture | OptiTrack VRPN, 3 rigid bodies, ~120 Hz |
| License | CC-BY-4.0 |

*Comparison table*

## Recording setup

| Stream | Hardware | Native shape | Rate |
|---|---|---|---|
| 3× RealSense color | Intel D415 (serials 143322063538, 104122062574, 217222066989) | 480×640×3 uint8 (BGR) | 30 FPS |
| 3× RealSense depth | same | 480×640 uint16 (mm) | 30 FPS |
| 2× GelSight tactile | GelSight Mini (left / right) | 480×640×3 uint8 | ~25 FPS, resampled to camera ticks |
| 3× OptiTrack rigid bodies | `motherboard`, `sensor_left`, `sensor_right` | 7-vector (x, y, z, qx, qy, qz, qw) | ~120 Hz |

All streams share a common monotonic timestamp axis recorded under `timestamps`. Per-tracker OptiTrack streams carry their own higher-rate timestamps under `optitrack/<body>/timestamps`.
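If you ever need to redo the pose alignment yourself (e.g. against the raw per-body streams), nearest-neighbor matching is straightforward. A minimal sketch, assuming sorted timestamp arrays; the variable names are illustrative, not part of the released schema:

```python
import numpy as np

def nearest_pose(cam_ts: np.ndarray, body_ts: np.ndarray, body_poses: np.ndarray) -> np.ndarray:
    """Pick, for each camera tick, the pose with the closest timestamp.

    cam_ts:     (T,)   camera timestamps, seconds
    body_ts:    (N,)   higher-rate OptiTrack timestamps, sorted ascending
    body_poses: (N, 7) poses as (x, y, z, qx, qy, qz, qw)
    """
    idx = np.searchsorted(body_ts, cam_ts)   # first body_ts >= each camera tick
    idx = np.clip(idx, 1, len(body_ts) - 1)
    # Choose the nearer of the two bracketing samples.
    left_closer = (cam_ts - body_ts[idx - 1]) < (body_ts[idx] - cam_ts)
    return body_poses[np.where(left_closer, idx - 1, idx)]  # (T, 7)
```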

## Tasks

The dataset is organized task-first so new tasks can be added without renaming files or recomputing existing episodes.

| Task | Description | Dates | Episodes |
|---|---|---|---|
| `motherboard` | Bimanual manipulation of components on a computer motherboard | 2026-03-23, 2026-05-10, 2026-05-11 | 30 |

`tasks.json` at the repo root is the source of truth for the task registry.
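A quick way to inspect it (the internal structure of `tasks.json` isn't spelled out in this README, so print it before relying on specific fields):

```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download("yxma/React", "tasks.json", repo_type="dataset")
with open(path) as f:
    registry = json.load(f)   # structure undocumented here -- inspect before use
print(json.dumps(registry, indent=2))
```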

## Episode previews

Each `figures/episode_previews/motherboard/<date>/episode_NNN.gif` shows the first 2 minutes of that episode at 15× speed (≈8 s loop; 3-panel layout: overhead camera | tactile left | tactile right). Browse them in the `figures/episode_previews` folder of this repo.

## Statistics & analysis

### Episode length

Most episodes run 1–10 min; the median is 4 min — roughly 8× longer than BridgeData V2's typical 30 s demo. The longest episode is 19 min.

### Contact event durations

4,136 contact events in total. The typical event lasts ≈0.7 s (median), with a long tail out to 33 s — useful for downstream grasp/contact-classification tasks.
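Events like these can be re-derived from the per-frame intensity traces. A minimal sketch, assuming a simple threshold segmentation (the release's exact event-segmentation rule isn't documented here, so the threshold is illustrative and is not the dataset's `tau`):

```python
import numpy as np

def contact_events(intensity: np.ndarray, timestamps: np.ndarray, thresh: float = 8.0):
    """Segment an intensity trace into contiguous above-threshold events.

    Returns (start_s, end_s, duration_s) per event. `thresh` is illustrative;
    the dataset's own event definition may differ.
    """
    in_contact = intensity > thresh
    edges = np.diff(in_contact.astype(np.int8))
    starts = np.flatnonzero(edges == 1) + 1    # False -> True transitions
    ends = np.flatnonzero(edges == -1) + 1     # True -> False transitions (exclusive)
    if in_contact[0]:
        starts = np.r_[0, starts]
    if in_contact[-1]:
        ends = np.r_[ends, len(in_contact)]
    return [(timestamps[s], timestamps[e - 1], timestamps[e - 1] - timestamps[s])
            for s, e in zip(starts, ends)]

# e.g. events = contact_events(ep["tactile_left_intensity"].numpy(), ep["timestamps"].numpy())
```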

### Where on the gel does contact land?

*Contact heatmap*

Both sensors show contact concentrated in the central ~50% of the gel surface, dropping off toward the edges. The left gel has discrete bright spots from repeated contacts with specific features.

### Tactile signal is real and varied (not flat noise)

*Tactile montage*

16 random contact frames sampled across the dataset show discrete pins, edges, smooth surfaces, and multi-object contact.

### Bimanual workspace

*Pose trajectory*

Multi-view projection of the longest episode (2026-05-11 / ep_017, 19 min). The left (blue) and right (orange) sensors operate over a ~30 × 40 × 30 cm workspace.

### Tactile is independent of motion

*Cross-modal correlation*

Sensor velocity vs. tactile intensity is essentially uncorrelated (r ≈ +0.04 / −0.05). Tactile carries information that is not explained by pose and velocity — a direct argument for including tactile in policy / world-model training.
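For reference, the basic computation behind those r values looks like this — a sketch on one episode's left sensor, using finite-difference speed from the pose's position part (the figure's exact analysis choices, e.g. smoothing or pooling across episodes, may differ):

```python
import numpy as np
import torch
from huggingface_hub import hf_hub_download

# Any episode works; this one matches the Quick start example below.
path = hf_hub_download(
    repo_id="yxma/React",
    repo_type="dataset",
    filename="processed/mode1_v1/motherboard/2026-05-11/episode_003.pt",
)
ep = torch.load(path, weights_only=False)

ts = ep["timestamps"].numpy()                  # (T,) seconds
pos = ep["sensor_left_pose"][:, :3].numpy()    # (T, 3) position part of the pose

# Finite-difference speed, aligned with frames 1..T-1.
speed = np.linalg.norm(np.diff(pos, axis=0), axis=1) / np.diff(ts)
intensity = ep["tactile_left_intensity"].numpy()[1:]

print("r =", np.corrcoef(speed, intensity)[0, 1])
```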

### Per-episode summary

The detailed table is also exported as CSV: `figures/dataset_figures/F8_per_episode_summary.csv`.

## Repository layout

```
tasks.json                                          # registry
processed/
└─ mode1_v1/
    └─ <task>/
        └─ <date>/
            ├─ episode_000.pt
            ├─ episode_000.contact.json
            └─ ...
figures/
├─ contact_intensity_full.png          # tactile intensity over full dataset (waveform view)
├─ contact_intensity_20min.png         # 20-min zoom of the same
├─ episode_previews/<task>/<date>/episode_*.gif   # per-episode GIF previews
└─ dataset_figures/                    # F1–F8 statistics and analysis figures
```

The `processed/mode1_v1/` view is a task-specific slice of the underlying raw recordings, not the full sensor suite. It was produced by `twm/preprocess.py` + `twm/contact_index.py` from a private raw HDF5 mirror.

## `processed/mode1_v1/` schema

Each `episode_*.pt` is a Python dict loadable with `torch.load(..., weights_only=False)`.

| Key | Shape | dtype | Description |
|---|---|---|---|
| `view` | (T, 3, 128, 128) | uint8 | Overhead camera (`realsense/cam0/color`), center-cropped to square then bilinear-resized to 128×128 |
| `tactile_left` | (T, 3, 128, 128) | uint8 | Left GelSight, same crop/resize |
| `tactile_right` | (T, 3, 128, 128) | uint8 | Right GelSight, same crop/resize |
| `timestamps` | (T,) | float64 | Camera timestamps (seconds, monotonic clock) |
| `sensor_left_pose` | (T, 7) | float32 | Left GelSight rigid-body OptiTrack pose, nearest-neighbor aligned to camera timestamps |
| `sensor_right_pose` | (T, 7) | float32 | Right GelSight rigid-body OptiTrack pose, same alignment |
| `tactile_{left,right}_intensity` | (T,) | float32 | Per-frame mean per-pixel L2 distance from a contact-free reference frame |
| `tactile_{left,right}_area` | (T,) | float32 | Per-frame fraction of pixels with L2 diff > tau |
| `tactile_{left,right}_mixed` | (T,) | float32 | Mean of (diff × mask): intensity restricted to contact pixels |
| `_contact_meta` | – | dict | Per-episode contact metadata: tau, drift between first/p01 reference frames, p01 reference indices, the chosen reference RGB frames, etc. |

Each `.contact.json` is a small summary of the metric distributions plus drift diagnostics, intended for filtering / sanity checking without loading the full tensors.

## Contact metric definition

For each tactile sensor independently:

1. Pick a contact-free reference frame: the ~0.1th-percentile-quietest frame by mean L2 distance to the temporal median (`reference_strategy = "p01"`).
2. For each frame t, compute the per-pixel diff `diff[t, x, y] = ‖frame[t, :, x, y] − ref[:, x, y]‖₂` (L2 over the RGB channels).
3. Then:
   - `intensity[t] = mean(diff[t])`
   - `area[t] = mean(diff[t] > tau)` (default `tau = 8.0` on the uint8 scale)
   - `mixed[t] = mean(diff[t] * (diff[t] > tau))`

_contact_meta["drift_warning"] is True if either sensor's drift (L2 distance between the first frame and the p01-reference frame) exceeds 2·tau; in this release no episode triggers it.

## Quick start

Load one task via `datasets`:

```python
from datasets import load_dataset

ds = load_dataset("yxma/React", "motherboard", split="train")
# Each row is one .pt file path; the actual tensors live inside.
```

Or load a single episode directly:

```python
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="yxma/React",
    repo_type="dataset",
    filename="processed/mode1_v1/motherboard/2026-05-11/episode_003.pt",
)
ep = torch.load(path, weights_only=False)
print(ep["view"].shape, ep["tactile_left_intensity"].shape)
# torch.Size([10032, 3, 128, 128]) torch.Size([10032])
```

Load contact metadata only (much smaller — useful for filtering):

```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="yxma/React",
    repo_type="dataset",
    filename="processed/mode1_v1/motherboard/2026-05-11/episode_003.contact.json",
)
with open(path) as f:
    meta = json.load(f)
print(meta["drift_left"], meta["drift_right"], meta["drift_warning"])
```

## Known caveats

- Missing / dropped episodes on `motherboard/2026-05-11`:
  - `episode_000` and `episode_002` — short test recordings (8.8 s and 10.4 s) with no tactile contact on either sensor; intentionally excluded.
  - `episode_001` — lost at recording time (the HDF5 superblock was never finalized because the writer was killed mid-write); intentionally absent.
  - The remaining episode IDs are therefore non-contiguous. Don't infer ordering from filename gaps.
- Lossy resize: the 128×128 `view` and tactile fields are downsampled from the native 480×640. Native resolution is not preserved in this release.
- Single camera: only `realsense/cam0/color` is included. The other two RealSense views and all depth streams are not in `processed/mode1_v1/`.
- OptiTrack alignment: the per-step poses are nearest-neighbor matched to camera ticks. The full ~120 Hz pose streams are not preserved here.
- The contact metrics are opinionated: they depend on the chosen `tau` and the p01 reference strategy. Using a different `tau` requires re-deriving the metrics from the raw data.

## Roadmap

- More tasks — the registry in `tasks.json` will grow.
- A LeRobot-format full-fidelity variant (`lerobot/v1.0/`) is planned. It will include all three RealSense color and depth streams, GelSight at native resolution (FFV1 lossless), full-rate OptiTrack pose tracks for all three rigid bodies, and HF-native browser previews. The current `processed/mode1_v1/` slice will remain as a stable training-task view.

## Citation

If you use this dataset, please cite (TODO: add bibtex).