---
license: mit
task_categories:
- robotics
tags:
- robotics
- manipulation
- contact-rich-manipulation
- force-torque
- imitation-learning
- flow-matching
- zarr
pretty_name: ForceFlow Dataset
size_categories:
- 10G
---

# ForceFlow Dataset

---

## Files

Each task is distributed in three formats:

- **`<task>.zarr/`** — Zarr v2 directory store, ready for direct training use
- **`<task>.zip`** — Zipped archive of the same zarr store
- **`<task>_normalizer.json`** — Pre-computed normalizer statistics (mean/std) for all fields

### Zarr Structure

```
<task>.zarr/
├── data/
│   ├── action          (N, 6)            float32 — end-effector delta pose (6-DOF)
│   ├── pos             (N, 6)            float32 — end-effector absolute pose
│   ├── force           (N, 6)            float32 — raw F/T sensor readings
│   ├── delta_force     (N, 6)            float32 — force delta (not in `peel`)
│   ├── gripper_action  (N, 1)            float32 — gripper command (0=open, 1=close)
│   ├── gripper_state   (N, 1)            float32 — gripper current state
│   ├── rgb_arm         (N, 3, 240, 320)  uint8   — wrist camera (JPEG-compressed)
│   └── rgb_fix         (N, 3, 240, 320)  uint8   — fixed camera (JPEG-compressed)
└── meta/
    └── episode_ends    (E,)              uint32  — cumulative step index at each episode end
```

> **Note:** The `peel` task does not contain the `delta_force` field. RGB arrays are stored with a custom JPEG codec. To read them, install [image_codecs](https://github.com/JokerESC/ForceFlow/tree/main/CleanDiffuser/image_codecs) from the ForceFlow repo and register the codec before opening the zarr store.
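The `episode_ends` convention above (cumulative step indices, one per episode) can be sanity-checked with a toy example. The values below are synthetic, not taken from the dataset:

```python
import numpy as np

# Synthetic miniature dataset: 3 episodes of lengths 4, 2, 3.
episode_ends = np.array([4, 6, 9], dtype=np.uint32)          # cumulative end indices
actions = np.zeros((9, 6), dtype=np.float32)                  # stand-in for data/action

# Episode i spans [starts[i], episode_ends[i]) in the flat arrays.
starts = np.concatenate([[0], episode_ends[:-1]])
episodes = [actions[s:e] for s, e in zip(starts, episode_ends)]

lengths = [len(ep) for ep in episodes]
print(lengths)  # → [4, 2, 3]
```

The same start/end arithmetic applies to every array under `data/`, since they all share the leading step axis `N`.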
---

## Usage

### Prerequisites

```bash
git clone --recurse-submodules https://github.com/JokerESC/ForceFlow.git
cd ForceFlow
pip install -r requirements.txt
pip install -e CleanDiffuser/
```

### Load a dataset

```python
import sys
sys.path.insert(0, 'path/to/ForceFlow/CleanDiffuser')

# Register the custom JPEG codec before opening the store,
# otherwise the RGB arrays cannot be decoded.
import numcodecs
import image_codecs
numcodecs.register_codec(image_codecs.jpeg)

import zarr
import numpy as np

z = zarr.open('plug.zarr', 'r')

episode_ends = z['meta/episode_ends'][:]  # shape (100,)
actions = z['data/action'][:]             # shape (50107, 6)
forces = z['data/force'][:]               # shape (50107, 6)
rgb_arm = z['data/rgb_arm'][:]            # shape (50107, 3, 240, 320)

# Reconstruct per-episode slices
starts = np.concatenate([[0], episode_ends[:-1]])
for ep_idx, (s, e) in enumerate(zip(starts, episode_ends)):
    ep_actions = actions[s:e]  # (T, 6)
    ep_forces = forces[s:e]    # (T, 6)
```

### Training with ForceFlow

```bash
# Edit configs/xarm.yaml to point to the downloaded data
python -m pipeline.train --config configs/xarm.yaml
```

---

## Hardware

| Component | Details |
|---|---|
| Robot arm | UFACTORY xArm6 |
| F/T sensor | 6-axis wrist force/torque sensor |
| Wrist camera | Intel RealSense D435 |
| Fixed camera | Intel RealSense L515 |
| Teleoperation | 3Dconnexion SpaceMouse |

---

## License

MIT — see [LICENSE](https://github.com/JokerESC/ForceFlow/blob/main/LICENSE).

---

## Citation

If you use this dataset, please cite:

```bibtex
@misc{forceflow2025,
  title  = {ForceFlow: Learning to Feel and Act via Contact-Driven Flow Matching},
  author = {JokerESC},
  year   = {2025},
  url    = {https://github.com/JokerESC/ForceFlow}
}
```
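The normalizer files ship per-field mean/std statistics for standardizing inputs before training. The exact JSON schema of `<task>_normalizer.json` is not documented here, so the layout below (`{field: {"mean": [...], "std": [...]}}`) and the `normalize` helper are illustrative assumptions, not the repo's API:

```python
import numpy as np

# Hypothetical normalizer layout — check the actual JSON keys in
# <task>_normalizer.json and adjust accordingly.
stats = {"action": {"mean": [0.0] * 6, "std": [1.0] * 6}}

def normalize(x, field, stats):
    """Standardize x with pre-computed per-field mean/std."""
    mean = np.asarray(stats[field]["mean"], dtype=np.float32)
    std = np.asarray(stats[field]["std"], dtype=np.float32)
    # Guard against zero std to avoid division by zero.
    return (x - mean) / np.maximum(std, 1e-8)

batch = np.zeros((4, 6), dtype=np.float32)  # e.g. a slice of data/action
out = normalize(batch, "action", stats)
print(out.shape)  # (4, 6)
```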