---
pretty_name: "EXOKERN ContactBench v0 — Peg Insertion with Force/Torque"
license: cc-by-nc-4.0
tags:
  - robotics
  - force-torque
  - contact-rich
  - manipulation
  - insertion
  - lerobot
  - isaac-lab
  - forge
  - franka
  - simulation
  - benchmark
  - physical-ai
  - wrench
language:
  - en
size_categories:
  - 1K<n<10K
task_categories:
  - robotics
dataset_info:
  features:
    - name: observation.state
      dtype: float32
      shape:
        - 24
    - name: observation.wrench
      dtype: float32
      shape:
        - 6
    - name: action
      dtype: float32
      shape:
        - 7
    - name: timestamp
      dtype: float64
    - name: frame_index
      dtype: int64
    - name: episode_index
      dtype: int64
    - name: index
      dtype: int64
    - name: task_index
      dtype: int64
  splits:
    - name: train
      num_examples: 330929
configs:
  - config_name: default
    data_files:
      - split: train
        path: "data/**/*.parquet"
---
# EXOKERN ContactBench v0 — Peg Insertion with Force/Torque

<p align="center">
  <strong>The first publicly available insertion dataset with calibrated 6-axis force/torque annotations.</strong><br>
  <em>Part of the <a href="https://huggingface.co/EXOKERN">ContactBench</a> collection by EXOKERN — The Data Engine for Physical AI</em>
</p>

<p align="center">
  <img src="force_profile_sample.png" alt="Force profile during peg insertion episode" width="720">
</p>

---
|
## Why This Dataset Exists

The vast majority of existing robotics manipulation datasets contain **no force/torque data**. Yet contact-rich tasks — insertion, threading, snap-fit assembly — fundamentally depend on haptic feedback for reliable execution. Vision alone cannot distinguish a jammed peg from a seated one.

This dataset provides **2,221 peg-in-hole insertion episodes** with full **6-axis wrench data at every timestep**, generated using the [FORGE](https://arxiv.org/abs/2408.04587) (Force-Guided Exploration) framework in NVIDIA Isaac Lab. Every frame captures what the robot *feels*, not just what it sees.
|
---

## Dataset Overview

| Metric | Value |
|---|---|
| Episodes | 2,221 |
| Total Frames | 330,929 |
| Avg Episode Length | ~149 steps |
| Control Frequency | 20 Hz |
| Format | [LeRobot v3.0](https://github.com/huggingface/lerobot) (Parquet) |
| Robot | Franka Emika Panda (7-DOF) |
| Simulator | NVIDIA Isaac Lab 2.3.x + PhysX GPU |
| Task | `Isaac-Forge-PegInsert-Direct-v0` |
| Size | ~75 MB |
| License | CC-BY-NC 4.0 |
|
---

## Features

Each frame contains the following tensors:

| Feature | Shape | Description |
|---|---|---|
| `observation.state` | `(24,)` | Flattened observation vector — see [State Tensor Semantics](#state-tensor-semantics) below |
| **`observation.wrench`** | **`(6,)`** | **6-axis force/torque: [Fx, Fy, Fz, Mx, My, Mz] — see [Wrench Specification](#wrench-specification)** |
| `action` | `(7,)` | Delta end-effector pose command [dx, dy, dz, dRx, dRy, dRz] plus a success-prediction scalar |
| `timestamp` | scalar | Wall-clock time within episode (s) |
| `frame_index` | int | Frame position within episode |
| `episode_index` | int | Episode identifier |
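Given the action layout above, a frame's 7-vector splits into translational, rotational, and success-prediction parts. This is an illustrative sketch with made-up values, not data from the dataset:

```python
import numpy as np

# Hypothetical action for one frame, laid out as in the table above:
# [dx, dy, dz, dRx, dRy, dRz, success_pred]
action = np.array([0.01, -0.02, 0.005, 0.0, 0.0, 0.1, 0.87], dtype=np.float32)

delta_pos = action[:3]    # translational delta command (m)
delta_rot = action[3:6]   # rotational delta command (rad)
success_pred = action[6]  # FORGE success-prediction scalar

print(delta_pos, delta_rot, success_pred)
```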
|
---

## State Tensor Semantics

The `observation.state` tensor is a flat 24-element vector produced by FORGE's `collapse_obs_dict()`. It concatenates the following quantities in order:

| Index | Field | Dims | Unit | Description |
|---|---|---|---|---|
| 0–2 | `fingertip_pos` | 3 | m | End-effector (fingertip) position in world frame |
| 3–5 | `fingertip_pos_rel_fixed` | 3 | m | EE position relative to the socket (fixed part) |
| 6–9 | `fingertip_quat` | 4 | — | EE orientation as quaternion [w, x, y, z] |
| 10–12 | `ee_linvel` | 3 | m/s | End-effector linear velocity |
| 13–15 | `ee_angvel` | 3 | rad/s | End-effector angular velocity |
| 16–21 | `force_sensor_smooth` | 6 | N, N·m | Smoothed 6-axis wrench (same data as `observation.wrench`) |
| 22 | `force_threshold` | 1 | N | Maximum allowable contact force (FORGE parameter) |
| 23 | `ema_factor` | 1 | — | Exponential moving average smoothing coefficient |

> **Implementation reference:** These fields correspond to `OBS_DIM_CFG` in [`factory_env_cfg.py`](https://github.com/isaac-sim/IsaacLab/blob/main/source/isaaclab_tasks/isaaclab_tasks/direct/factory/factory_env_cfg.py) (indices 0–15), extended by the FORGE force-sensing additions (indices 16–23). See [Noseworthy et al., 2024](https://arxiv.org/abs/2408.04587) §III for details.
|
---

## Wrench Specification

| Property | Value |
|---|---|
| **Coordinate frame** | **End-effector body frame** (Franka `panda_hand` link) |
| **Convention** | [Fx, Fy, Fz, Mx, My, Mz] |
| **Force unit** | Newtons (N) |
| **Torque unit** | Newton-meters (N·m) |
| **Source** | `env.unwrapped.force_sensor_smooth` — Isaac Lab contact sensor with EMA smoothing |
| **Typical force range** | ±50 N per axis |
| **Typical torque range** | ±10 N·m per axis |

The wrench is reported in the **end-effector body frame** attached to the Franka gripper link. Forces are measured via Isaac Lab's `get_link_incoming_joint_force()` method applied to the wrist joint, then smoothed with an exponential moving average (EMA factor configurable, default 0.2).

> **Note:** `observation.wrench` contains the same data as `observation.state[16:22]`. The wrench is stored as a dedicated column for direct access without index arithmetic.
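For reference, the EMA smoothing described above amounts to a one-line per-axis filter. The sketch below is illustrative only; it is not the Isaac Lab implementation:

```python
import numpy as np

def ema_smooth(raw_wrench: np.ndarray, prev_smooth: np.ndarray,
               ema_factor: float = 0.2) -> np.ndarray:
    """One EMA update: blend the new raw 6-axis wrench with the previous
    smoothed value. ema_factor weights the new sample (default 0.2)."""
    return ema_factor * raw_wrench + (1.0 - ema_factor) * prev_smooth

# Example: a sudden 10 N spike on Fz is damped by the filter
prev = np.zeros(6)
raw = np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0])
print(ema_smooth(raw, prev))  # Fz becomes 2.0, not 10.0
```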
|
---

## Quick Start

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load dataset
dataset = LeRobotDataset("EXOKERN/contactbench-forge-peginsert-v0")
print(f"Episodes: {dataset.num_episodes}, Frames: {len(dataset)}")

# Access a single frame
frame = dataset[0]
state = frame["observation.state"]    # (24,) — full observation
wrench = frame["observation.wrench"]  # (6,)  — force/torque

# Decompose wrench
force = wrench[:3]   # [Fx, Fy, Fz] in N
torque = wrench[3:]  # [Mx, My, Mz] in N·m
print(f"Force: {force} N")
print(f"Torque: {torque} N·m")

# Decompose state vector
ee_pos = state[0:3]          # fingertip position (m)
ee_pos_rel = state[3:6]      # position relative to socket (m)
ee_quat = state[6:10]        # orientation quaternion
ee_linvel = state[10:13]     # linear velocity (m/s)
ee_angvel = state[13:16]     # angular velocity (rad/s)
wrench_state = state[16:22]  # force/torque (N, N·m) — same as observation.wrench
f_threshold = state[22]      # force threshold (N)
ema = state[23]              # EMA smoothing factor
```

### Load with pure HuggingFace Datasets

```python
from datasets import load_dataset

ds = load_dataset("EXOKERN/contactbench-forge-peginsert-v0", split="train")
print(ds[0].keys())
```
|
---

## Experimental Results: The Value of Force/Torque

We trained a behavior cloning (BC) policy on this dataset to ablate the impact of force/torque data on the peg-in-hole task. Both policies achieve a 100% insertion success rate, but the difference in physical execution is substantial:

| Condition | Success Rate | Avg Contact Force (N) |
|-----------|--------------|-----------------------|
| Kinematics only | 100.0% | 6.4 |
| **With force/torque** | **100.0%** | **0.1** |

**Conclusion:** Both policies solve the geometric task, but the F/T-aware policy performs the insertion softly, like an expert, reducing contact forces by **98.4%**. In industrial applications, this is the difference between a successful assembly and a damaged part.
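A contact-force metric like the one in this table can be computed straight from the `observation.wrench` column. The sketch below uses an illustrative 0.05 N contact threshold, which is an assumption rather than a parameter of this dataset:

```python
import numpy as np

def avg_contact_force(wrenches: np.ndarray, contact_thresh: float = 0.05) -> float:
    """Mean force magnitude over frames that are in contact.

    wrenches: (T, 6) array of [Fx, Fy, Fz, Mx, My, Mz] per frame.
    contact_thresh: minimum force norm (N) to count a frame as
    'in contact' (illustrative value, not from the dataset card).
    """
    force_norms = np.linalg.norm(wrenches[:, :3], axis=1)
    in_contact = force_norms > contact_thresh
    if not in_contact.any():
        return 0.0
    return float(force_norms[in_contact].mean())

# Synthetic example: free-space approach followed by light contact
wrenches = np.zeros((100, 6))
wrenches[60:, 2] = 0.5  # 0.5 N along Fz during the contact phase
print(avg_contact_force(wrenches))  # → 0.5
```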
|
### V2: Temporal Model (10-Frame Window)

Upgrading to a 1D CNN that processes the last 10 frames improves overall prediction quality by roughly 4x and yields a 9.6% F/T advantage in offline MSE:

| Model | Condition | Val MSE | Avg Force (N) |
|-------|-----------|---------|---------------|
| V1 MLP | With F/T | 0.475 | 0.1 |
| V1 MLP | Without F/T | 0.520 | 6.4 |
| **V2 Temporal CNN** | **With F/T** | **0.119** | **3.0** |
| V2 Temporal CNN | Without F/T | 0.132 | 4.7 |

The temporal model dramatically improves both conditions, but it especially helps the blind (no-F/T) policy compensate via position/velocity trends — confirming that **direct force feedback remains irreplaceable for gentle contact control**.
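The 10-frame inputs consumed by a temporal model like V2 can be produced with a simple sliding-window helper. This is a sketch under the episode layout described in this card; the actual V2 training pipeline is not included here:

```python
import numpy as np

def make_windows(features: np.ndarray, window: int = 10) -> np.ndarray:
    """Stack the last `window` frames for each valid timestep.

    features: (T, D) per-frame feature matrix (e.g. state + wrench).
    Returns (T - window + 1, window, D) — each row is one model input.
    """
    T = features.shape[0]
    return np.stack([features[t:t + window] for t in range(T - window + 1)])

# Example: one 149-step episode with 30-D features (24 state + 6 wrench)
episode = np.random.randn(149, 30).astype(np.float32)
windows = make_windows(episode, window=10)
print(windows.shape)  # (140, 10, 30)
```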
|
---

## Data Collection

| Parameter | Value |
|---|---|
| **RL algorithm** | rl_games PPO (~200 epochs, reward ~352) |
| **Environment** | `Isaac-Forge-PegInsert-Direct-v0` |
| **Collection mode** | Single-env rollout (`num_envs=1`), deterministic policy |
| **Episode horizon** | Fixed 149 steps (no early termination during collection) |
| **Sensor bandwidth** | 20 Hz (matched to control frequency) |
| **Contact dynamics** | PhysX GPU solver, Isaac Lab default parameters |
| **Domain randomization** | FORGE defaults (controller gains, friction, mass, dead-zone) |

---
|
## Intended Use

**Research applications:**
- Training and benchmarking force-aware manipulation policies
- Sim-to-real transfer studies for contact-rich assembly
- Hybrid vision + force/torque policy architectures
- Evaluating the effect of F/T data on manipulation performance

**Not intended for:**
- Direct deployment on physical robots without sim-to-real calibration
- Safety-critical applications without additional validation

---
|
## Reproduction

Methodology documentation is available upon request.
Visit [exokern.com](https://exokern.com) for details.

---
|
## Related Work

This dataset builds on the following research:

- **FORGE** — Noseworthy et al., "[Force-Guided Exploration for Robust Contact-Rich Manipulation under Uncertainty](https://arxiv.org/abs/2408.04587)" (ISRR 2024)
- **Factory** — Narang et al., "[Factory: Fast Contact for Robotic Assembly](https://arxiv.org/abs/2205.03532)" (RSS 2022)
- **IndustReal** — Tang et al., "[IndustReal: Transferring Contact-Rich Assembly Tasks from Simulation to Reality](https://arxiv.org/abs/2305.17110)" (RSS 2023)
- **LeRobot** — Cadene et al., "[LeRobot: State-of-the-art Machine Learning for Real-World Robotics](https://github.com/huggingface/lerobot)" (2024)

---
|
## License

**CC-BY-NC 4.0** — Free for research and non-commercial use.

Commercial licensing and custom Contact Skill Packs available from EXOKERN.
Visit [exokern.com](https://exokern.com) for enterprise inquiries.

---
|
## About EXOKERN

**EXOKERN — The Data Engine for Physical AI**

We produce industrially calibrated force/torque manipulation data for enterprise robotics, humanoid manufacturers, and research institutions. Contact-rich. Sensor-annotated. Industrially validated.

🌐 [exokern.com](https://exokern.com) · 🤗 [huggingface.co/EXOKERN](https://huggingface.co/EXOKERN)

---
|
## Citation

```bibtex
@dataset{exokern_contactbench_peginsert_v0,
  title  = {ContactBench v0: Peg Insertion with Force/Torque},
  author = {{EXOKERN}},
  year   = {2026},
  url    = {https://huggingface.co/datasets/EXOKERN/contactbench-forge-peginsert-v0},
  note   = {2,221 episodes, 330K frames, 6-axis F/T, LeRobot v3.0 format}
}
```
|