---
license: mit
task_categories:
  - robotics
tags:
  - robotics
  - manipulation
  - contact-rich-manipulation
  - force-torque
  - imitation-learning
  - flow-matching
  - zarr
pretty_name: ForceFlow Dataset
size_categories:
  - 10G<n<100G
---

# ForceFlow Dataset

**ForceFlow: Learning to Feel and Act via Contact-Driven Flow Matching**

[Project Page] | [Code]

## Motivation

Contact-rich manipulation remains one of the hardest problems in robot learning: vision alone cannot capture the high-frequency contact dynamics that determine whether a plug seats correctly, a stamp triggers cleanly, or a wipe exerts consistent pressure. This dataset was collected to support ForceFlow, a force-aware reactive framework built on flow matching that addresses this gap.

ForceFlow fuses temporal force/torque history with visual observations through an asymmetric multimodal design — force history acts as a global regulation signal to prevent it from being overshadowed by high-dimensional image features, while a hybrid action space jointly predicts end-effector motion and expected next-step contact force. To handle spatial generalization, ForceFlow introduces a Vision-to-Force (V2F) handover: a VLM first localizes the target in the scene, then control passes to the force-aware policy for precise local contact interaction.

This dataset contains 7 real-robot teleoperated demonstration tasks spanning two categories of contact-rich manipulation, collected on a UFACTORY xArm6 equipped with a 6-axis wrist F/T sensor and dual Intel RealSense cameras.


## Tasks

**Short-horizon contact** — tasks requiring precise force application at a specific moment:

| Task | Episodes | Total Steps | Key Challenge |
|---|---|---|---|
| `stamp` | 100 | 45,867 | Visual ambiguity in paper thickness; force-triggered stamping |
| `plug` | 100 | 50,107 | Coarse visual alignment with force-guided insertion |
| `press_button` | 50 | 23,396 | Varying spring constants and trigger depths |
| `insert` | 50 | 25,032 | Sub-millimeter tolerance and geometric jamming |

**Continuous contact** — tasks requiring sustained force regulation throughout execution:

| Task | Episodes | Total Steps | Key Challenge |
|---|---|---|---|
| `clean_whiteboard` | 100 | 56,810 | Stable normal force tracking on a planar surface |
| `clean_vase` | 50 | 85,478 | Adaptive force regulation on a curved, non-linear surface |
| `peel` | 50 | 38,564 | Consistent peel force on adhesive tape |

## Data Format

Each task ships as a Zarr store in two interchangeable forms, plus precomputed normalizer statistics:

- `<task>.zarr/` — Zarr v2 directory store, ready for direct training use
- `<task>.zip` — zipped archive of the same Zarr store
- `<task>_normalizer.json` — precomputed normalizer statistics (mean/std) for all fields
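The exact schema of `<task>_normalizer.json` is not documented here, so the snippet below is a hedged sketch that *assumes* it maps each field name to per-dimension `"mean"` and `"std"` lists (the `stats` dict stands in for `json.load`-ing the real file, and `normalize` is a hypothetical helper, not a ForceFlow API):

```python
import numpy as np

# Assumed schema: field name -> {"mean": [...], "std": [...]}, one entry per
# dimension. In practice you would build this via json.load on the real file.
stats = {
    "force": {"mean": [0.0] * 6, "std": [1.0] * 6},
}

def normalize(x: np.ndarray, field: str, stats: dict) -> np.ndarray:
    """Standardize an (N, D) array using precomputed per-dimension stats."""
    mean = np.asarray(stats[field]["mean"], dtype=np.float32)
    std = np.asarray(stats[field]["std"], dtype=np.float32)
    return (x - mean) / np.maximum(std, 1e-8)  # guard against zero std

forces = np.random.randn(100, 6).astype(np.float32)
forces_n = normalize(forces, "force", stats)
```

Standardizing inputs this way is the usual reason mean/std statistics ship alongside a dataset; check the repo's dataloader for the authoritative schema.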

### Zarr Structure

```text
<task>.zarr/
├── data/
│   ├── action          (N, 6)   float32  — end-effector delta pose (6-DOF)
│   ├── pos             (N, 6)   float32  — end-effector absolute pose
│   ├── force           (N, 6)   float32  — raw F/T sensor readings
│   ├── delta_force     (N, 6)   float32  — force delta (not in `peel`)
│   ├── gripper_action  (N, 1)   float32  — gripper command (0=open, 1=close)
│   ├── gripper_state   (N, 1)   float32  — gripper current state
│   ├── rgb_arm         (N, 3, 240, 320)  uint8 — wrist camera (JPEG-compressed)
│   └── rgb_fix         (N, 3, 240, 320)  uint8 — fixed camera (JPEG-compressed)
└── meta/
    └── episode_ends    (E,)     uint32   — cumulative step index at each episode end
```

**Note:** The `peel` task does not contain the `delta_force` field.
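Because `episode_ends` stores the *cumulative* step index at each episode end, per-episode lengths and start indices follow from simple differences. A minimal sketch with toy values:

```python
import numpy as np

# episode_ends is cumulative, so lengths are successive differences
# (the first episode starts at step 0).
episode_ends = np.array([120, 250, 400], dtype=np.uint32)  # toy example

lengths = np.diff(episode_ends, prepend=np.uint32(0))
starts = episode_ends - lengths  # step index where each episode begins
```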

RGB arrays are stored with a custom JPEG codec. To read them, install `image_codecs` from the ForceFlow repo and register the codec before opening the Zarr store.


## Usage

### Prerequisites

```bash
git clone --recurse-submodules https://github.com/JokerESC/ForceFlow.git
cd ForceFlow
pip install -r requirements.txt
pip install -e CleanDiffuser/
```

### Load a dataset

```python
import sys
sys.path.insert(0, 'path/to/ForceFlow/CleanDiffuser')

import numcodecs
import image_codecs
numcodecs.register_codec(image_codecs.jpeg)

import zarr
import numpy as np

z = zarr.open('plug.zarr', 'r')

episode_ends = z['meta/episode_ends'][:]   # shape (100,)
actions      = z['data/action'][:]         # shape (50107, 6)
forces       = z['data/force'][:]          # shape (50107, 6)
rgb_arm      = z['data/rgb_arm'][:]        # shape (50107, 3, 240, 320)

# Reconstruct per-episode slices
starts = np.concatenate([[0], episode_ends[:-1]])
for ep_idx, (s, e) in enumerate(zip(starts, episode_ends)):
    ep_actions = actions[s:e]   # (T, 6)
    ep_forces  = forces[s:e]    # (T, 6)
```
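When training sequence models on flat `(N, ...)` arrays like these, sampled windows must not straddle episode boundaries. The sketch below is illustrative (it is *not* ForceFlow's actual sampler, and `window_indices` is a hypothetical helper): it enumerates every fixed-length window that lies entirely within a single episode.

```python
import numpy as np

def window_indices(episode_ends: np.ndarray, horizon: int) -> np.ndarray:
    """Return a (num_windows, 2) array of [start, end) pairs,
    none of which crosses an episode boundary."""
    pairs = []
    start = 0
    for end in episode_ends:
        # Valid window starts leave `horizon` steps inside this episode.
        for s in range(start, end - horizon + 1):
            pairs.append((s, s + horizon))
        start = end
    return np.array(pairs, dtype=np.int64)

# Toy example: two episodes of lengths 5 and 4, windows of 3 steps each.
windows = window_indices(np.array([5, 9]), horizon=3)
```

Each returned pair can then index `actions[s:e]`, `forces[s:e]`, etc., to build training batches.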

### Training with ForceFlow

```bash
# Edit configs/xarm.yaml to point to the downloaded data
python -m pipeline.train --config configs/xarm.yaml
```

## Hardware

| Component | Details |
|---|---|
| Robot arm | UFACTORY xArm6 |
| F/T sensor | 6-axis wrist force/torque sensor |
| Wrist camera | Intel RealSense D435 |
| Fixed camera | Intel RealSense L515 |
| Teleoperation | 3Dconnexion SpaceMouse |

## License

MIT — see `LICENSE`.


## Citation

If you use this dataset, please cite:

```bibtex
@misc{forceflow2025,
  title  = {ForceFlow: Learning to Feel and Act via Contact-Driven Flow Matching},
  author = {JokerESC},
  year   = {2025},
  url    = {https://github.com/JokerESC/ForceFlow}
}
```