yxma committed
Commit 06bec85 · verified · 1 parent: 0742bc7

README cleanup: fix stale 75 % bimanual-contact claim (actual 64.3 % post-trim); reference freeze_intervals.json + MP4 previews + metadata/episodes.parquet that previous batches added.

Files changed (1): README.md +4 -4
README.md CHANGED
@@ -45,7 +45,7 @@ Dense, contact-rich, synchronized multimodal interaction data collected from **h
 |---|---|
 | **Robot-arm-free** | Recorded directly from a human operator holding two GelSight Mini sensors. No robot kinematics, no embodiment bias, no robot occluding the scene. |
 | **Tactile + RGB-D + mocap, simultaneous** | Most manipulation datasets ship one of these. React ships all three, synchronized to a common 30 Hz clock. |
-| **Contact-dense** | 66 % of all frames have confirmed tactile contact on at least one sensor — see [`figures/contact_intensity_full.png`](figures/contact_intensity_full.png). |
+| **Contact-dense** | **64 % of post-trim frames** have confirmed tactile contact on at least one sensor — see [`figures/contact_intensity_full.png`](figures/contact_intensity_full.png). |
 | **Long, continuous interaction** | Recordings are minutes long, not seconds. Median recording duration is 4 min; longest 19 min. Good for short-window sampling of dynamics, not for action-conditioned policy learning. |
 
 ![Comparison with other manipulation datasets](figures/dataset_figures/F7_comparison_table.png)
@@ -57,7 +57,7 @@ Dense, contact-rich, synchronized multimodal interaction data collected from **h
 | Embodiment | **Human hands (no robot)** — handheld GelSight sensors with motion-capture rigid bodies |
 | Intended use | Dynamics / world-model learning over short multimodal windows. Sample short trajectories (1 s – 10 s); recording-file boundaries are not action boundaries. |
 | Total synchronized duration | **105.7 min** at 30 Hz (190,231 multimodal frames, post-trim) |
-| Bimanual tactile-contact time | ** 75 % of post-trim frames** (median event duration 0.73 s; see `data_quality_report.csv` for per-file numbers) |
+| Bimanual tactile-contact time | **64.3 % of post-trim frames** (3,302 contact events, median 0.73 s; see [`figures/dataset_figures/F2_contact_event_duration_histogram.png`](figures/dataset_figures/F2_contact_event_duration_histogram.png) and [`metadata/episodes.parquet`](metadata/episodes.parquet) for per-file numbers) |
 | Cameras | 3× Intel RealSense D415 (color + depth), 480×640, 30 FPS |
 | Tactile | 2× GelSight Mini (left, right), handheld |
 | Motion capture | OptiTrack VRPN, 3 rigid bodies, ~120 Hz |
@@ -84,7 +84,7 @@ See [`tasks.json`](tasks.json) for the machine-readable registry (per-date `acti
 | OptiTrack track loss | 1,680 | 0.883 % | 6 | Marker briefly left mocap-volume / camera FOV mid-episode |
 | **Total (union)** | **1,768** | **0.929 %** | **11** | |
 
-Every flagged interval is in [`bad_frames.json`](bad_frames.json) keyed by `episode/episode_*` with TRIMMED-pt frame indices. Skip-list usage is shown below and in [`docs/quality.md`](docs/quality.md). Long start-of-episode OT-uninitialized prefixes (the dominant problem in the raw recordings) have already been trimmed from the published `.pt` files — see [`docs/caveats.md`](docs/caveats.md).
+Every flagged interval is in [`bad_frames.json`](bad_frames.json) keyed by `episode/episode_*` with TRIMMED-pt frame indices. A richer per-event view (with cross-modal motion + OT-gap + angular-velocity stats) lives in [`freeze_intervals.json`](freeze_intervals.json). Skip-list usage is shown below and in [`docs/quality.md`](docs/quality.md). Long start-of-episode OT-uninitialized prefixes (the dominant problem in the raw recordings) have already been trimmed from the published `.pt` files — see [`docs/caveats.md`](docs/caveats.md).
 
 ## Quick start
 
@@ -174,7 +174,7 @@ Full demo script: [`examples/demo_react_window.py`](examples/demo_react_window.p
 
 ## Recording-file previews
 
-Per-file GIF previews live under [`figures/episode_previews/`](figures/episode_previews) first 2 minutes at 10× speed, showing all 3 RealSense cameras with projected GelSight axes plus both tactile pads. (The on-disk recording unit is called an "episode" purely for file naming — these boundaries don't carry semantic / action meaning for this dataset.)
+Per-file previews live under [`figures/episode_previews/`](figures/episode_previews) as both `.gif` and `.mp4` (MP4s render inline on HF and are ~30× smaller). Each shows 60 frames evenly sampled across the episode in the recording-viewer layout: 3 RealSense cameras with projected GelSight axes, GelSight raw + diff thumbs, OptiTrack pose text panel. (The on-disk recording unit is called an "episode" purely for file naming — these boundaries don't carry semantic / action meaning for this dataset.)
 
 ## Repository layout
 
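The README text above says skip-list usage is shown in `docs/quality.md`; as a rough companion, here is a minimal sketch of consuming such a skip list when sampling short clean windows (90 frames = 3 s at the 30 Hz clock). The interval shape is an assumption — each episode key is taken to map to inclusive `[start, end]` frame-index pairs; the real `bad_frames.json` schema may differ, so treat this as illustrative only.

```python
def clean_windows(num_frames, bad_intervals, window=90):
    """Yield (start, end) spans of `window` consecutive clean frames.

    `bad_intervals` is ASSUMED to be a list of inclusive [start, end]
    frame-index pairs for one episode (check docs/quality.md for the
    actual bad_frames.json schema).
    """
    flagged = [False] * num_frames
    for s, e in bad_intervals:
        for i in range(s, min(e + 1, num_frames)):
            flagged[i] = True
    start = 0
    while start + window <= num_frames:
        bad = [i for i in range(start, start + window) if flagged[i]]
        if bad:
            start = bad[-1] + 1  # jump past the last flagged frame in view
        else:
            yield (start, start + window)
            start += window

# Hypothetical usage for one episode's intervals, e.g. loaded via
#   intervals = json.load(open("bad_frames.json"))["episode/episode_0001"]
windows = list(clean_windows(300, [[100, 119]], window=90))
```

Flagged frames are under 1 % of the dataset, so this greedy skip loses very little data in practice.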