
React

Multimodal manipulation recordings from a bimanual setup with vision-based tactile sensors and motion capture.

Recording setup

| Stream | Hardware | Native shape | Rate |
|---|---|---|---|
| 3× RealSense color | Intel D415 (serials 143322063538, 104122062574, 217222066989) | 480×640×3 uint8 (BGR) | 30 FPS |
| 3× RealSense depth | same | 480×640 uint16 (mm) | 30 FPS |
| 2× GelSight tactile | GelSight Mini (left / right) | 480×640×3 uint8 | ~25 FPS, resampled to camera ticks |
| 3× OptiTrack rigid bodies | motherboard, sensor_left, sensor_right | 7-vector (x, y, z, qx, qy, qz, qw) | ~120 Hz |

All streams share a common monotonic timestamp axis recorded under timestamps. Per-tracker OptiTrack streams carry their own higher-rate timestamps under optitrack/<body>/timestamps.
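
The nearest-neighbor resampling used to put higher-rate streams onto camera ticks can be sketched as follows. This is an illustrative reimplementation under the stated clock model, not the repository's actual preprocessing code:

```python
import numpy as np

def align_nearest(src_ts: np.ndarray, dst_ts: np.ndarray) -> np.ndarray:
    """For each destination (camera) timestamp, return the index of the
    nearest source (e.g. OptiTrack) timestamp. Both arrays must be sorted."""
    idx = np.searchsorted(src_ts, dst_ts)      # insertion points
    idx = np.clip(idx, 1, len(src_ts) - 1)
    left = src_ts[idx - 1]
    right = src_ts[idx]
    idx -= dst_ts - left < right - dst_ts      # step back if the left neighbor is closer
    return idx

# Toy example: a ~120 Hz pose clock resampled onto a 30 FPS camera clock.
pose_ts = np.arange(0.0, 1.0, 1 / 120)
cam_ts = np.arange(0.0, 1.0, 1 / 30)
sel = align_nearest(pose_ts, cam_ts)
print(sel[:4])  # first camera ticks map to pose indices 0, 4, 8, 12
```

Ties and out-of-range timestamps clamp to the nearest valid source index, so the output always has one pose index per camera tick.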

Tasks

The dataset is organized task-first so new tasks can be added without renaming or recompute. Current tasks:

| Task | Description | Dates | Episodes |
|---|---|---|---|
| motherboard | Bimanual manipulation of components on a computer motherboard | 2026-03-23, 2026-05-10, 2026-05-11 | 30 |

tasks.json at the repo root is the source of truth for the task registry.

Repository layout

```
tasks.json                                          # registry
processed/
└─ mode1_v1/
    └─ <task>/
        └─ <date>/
            ├─ episode_000.pt
            ├─ episode_000.contact.json
            └─ ...
```

The processed/mode1_v1/ view is a task-specific slice of the underlying raw recordings, not the full sensor suite. It was produced by twm/preprocess.py and twm/contact_index.py from a private raw HDF5 mirror.

processed/mode1_v1/ schema

Each episode_*.pt is a Python dict loadable with torch.load(..., weights_only=False).

| Key | Shape | dtype | Description |
|---|---|---|---|
| `view` | (T, 3, 128, 128) | uint8 | Overhead camera (realsense/cam0/color), center-cropped to square then bilinear-resized to 128×128 |
| `tactile_left` | (T, 3, 128, 128) | uint8 | Left GelSight, same crop/resize |
| `tactile_right` | (T, 3, 128, 128) | uint8 | Right GelSight, same crop/resize |
| `timestamps` | (T,) | float64 | Camera timestamps (seconds, monotonic clock) |
| `sensor_left_pose` | (T, 7) | float32 | Left GelSight rigid-body OptiTrack pose, nearest-neighbor aligned to camera timestamps |
| `sensor_right_pose` | (T, 7) | float32 | Right GelSight rigid-body OptiTrack pose, same alignment |
| `tactile_{left,right}_intensity` | (T,) | float32 | Per-frame mean per-pixel L2 distance from a contact-free reference frame |
| `tactile_{left,right}_area` | (T,) | float32 | Per-frame fraction of pixels with L2 diff > tau |
| `tactile_{left,right}_mixed` | (T,) | float32 | Mean of diff × mask, i.e. intensity restricted to contact pixels |
| `_contact_meta` | — | dict | Per-episode contact metadata: tau, drift between first/p01 reference frames, p01 reference indices, the chosen reference RGB frames, etc. |

Each .contact.json is a small summary of the metric distributions plus drift diagnostics, intended for filtering / sanity checking without loading the full tensors.
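
As one way to use these summaries for filtering, here is a minimal helper that relies only on the drift_warning field shown in the quick start below; other summary fields are not assumed here:

```python
import json
from pathlib import Path

def passes_drift_check(contact_json_path) -> bool:
    """Keep an episode only if its contact summary has no drift warning.

    Uses only the `drift_warning` field; the rest of the summary schema
    is not assumed by this sketch.
    """
    meta = json.loads(Path(contact_json_path).read_text())
    return not meta.get("drift_warning", False)

# Usage (after downloading the .contact.json files locally):
# keep = [p for p in Path("processed/mode1_v1/motherboard").rglob("*.contact.json")
#         if passes_drift_check(p)]
```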

Contact metric definition

For each tactile sensor independently:

  1. Pick a contact-free reference frame: rank frames by mean L2 distance to the temporal median and take the one near the ~0.1th percentile, i.e. among the quietest (reference_strategy = "p01").
  2. For each frame t, compute per-pixel diff[t, x, y] = || frame[t, :, x, y] − ref[:, x, y] ||_2 (RGB L2 over channels).
  3. Then:
    • intensity[t] = mean(diff[t])
    • area[t] = mean(diff[t] > tau) (default tau = 8.0 on the uint8 scale)
    • mixed[t] = mean(diff[t] * (diff[t] > tau))

_contact_meta["drift_warning"] is True if either sensor's drift (L2 distance between the first frame and the p01-reference frame) exceeds 2·tau; in this release no episode triggers it.
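
A minimal sketch of the three metrics, recomputed directly from the definition above (illustrative only; the released values come from the preprocessing pipeline, not this code):

```python
import numpy as np

def contact_metrics(frames: np.ndarray, ref: np.ndarray, tau: float = 8.0):
    """Recompute the per-frame contact metrics from the definition above.

    frames: (T, 3, H, W) uint8 tactile frames
    ref:    (3, H, W) uint8 contact-free reference frame
    Returns (intensity, area, mixed), each of shape (T,).
    """
    # Per-pixel RGB L2 distance to the reference, shape (T, H, W)
    diff = np.linalg.norm(
        frames.astype(np.float32) - ref.astype(np.float32), axis=1
    )
    mask = diff > tau                       # contact pixels
    intensity = diff.mean(axis=(1, 2))      # mean(diff[t])
    area = mask.mean(axis=(1, 2))           # mean(diff[t] > tau)
    mixed = (diff * mask).mean(axis=(1, 2)) # mean(diff[t] * (diff[t] > tau))
    return intensity, area, mixed

# Sanity check: a frame identical to the reference scores zero on all metrics,
# a uniformly shifted frame saturates area at 1.0.
ref = np.zeros((3, 4, 4), dtype=np.uint8)
frames = np.zeros((2, 3, 4, 4), dtype=np.uint8)
frames[1] += 100                            # uniform "contact" in frame 1
i, a, m = contact_metrics(frames, ref)
```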

Quick start

Load one task via datasets:

```python
from datasets import load_dataset

ds = load_dataset("yxma/React", "motherboard", split="train")
# Each row is one .pt file path; the actual tensors live inside.
```

Or load a single episode directly:

```python
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="yxma/React",
    repo_type="dataset",
    filename="processed/mode1_v1/motherboard/2026-05-11/episode_003.pt",
)
ep = torch.load(path, weights_only=False)
print(ep["view"].shape, ep["tactile_left_intensity"].shape)
# torch.Size([10032, 3, 128, 128]) torch.Size([10032])
```

Load contact metadata only (much smaller — useful for filtering):

```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="yxma/React",
    repo_type="dataset",
    filename="processed/mode1_v1/motherboard/2026-05-11/episode_003.contact.json",
)
with open(path) as f:
    meta = json.load(f)
print(meta["drift_left"], meta["drift_right"], meta["drift_warning"])
```

Known caveats

  • Missing / dropped episodes on motherboard/2026-05-11:
    • episode_000 and episode_002 — short test recordings (8.8s and 10.4s) with no tactile contact on either sensor; intentionally excluded.
    • episode_001 — lost at recording time (HDF5 superblock never finalized when the writer was killed mid-write); intentionally absent.
    • The remaining episode IDs are non-contiguous as a result. Don't infer ordering from filename gaps.
  • Lossy resize: the 128×128 view and tactile fields are downsampled from native 480×640. Native resolution is not preserved in this release.
  • Single camera: only realsense/cam0/color is included. The other two RealSense views and all depth streams are not in processed/mode1_v1/.
  • OptiTrack alignment: the per-step poses are nearest-neighbor matched to camera ticks. The full ~120 Hz pose streams are not preserved here.
  • Opinionated contact metrics: the metrics depend on the chosen tau and the p01 reference strategy. Using a different tau requires re-deriving the metrics from the raw recordings.

Roadmap

  • More tasks — registry in tasks.json will grow.
  • LeRobot-format full-fidelity variant (lerobot/v1.0/) is planned. It will include all three RealSense color and depth streams, GelSight at native resolution (FFV1 lossless), full-rate OptiTrack pose tracks for all three rigid bodies, and HF-native browser previews. The current processed/mode1_v1/ slice will remain as a stable training-task view.

Citation

If you use this dataset, please cite (TODO: add bibtex).
