# ManipArena Dataset

Training dataset for ManipArena, a real-robot benchmark and competition for bimanual manipulation at the CVPR 2026 Embodied AI Workshop.

This dataset provides rich multi-modal demonstrations in LeRobot format, covering 20 real-robot tasks and 3 simulation tasks. Beyond standard end-effector trajectories, we provide joint positions, velocities, currents, camera views, and mobile-base states — giving participants the freedom to explore diverse input representations.
## Dataset Structure

```
maniparena-dataset/
├── real/
│   ├── execution_reasoning/      (10 tasks, ~5,000 episodes)
│   ├── semantic_reasoning/       (5 tasks, ~2,800 episodes)
│   └── mobile_manipulation/      (5 tasks, ~2,900 episodes)
└── sim/
    ├── press_button_in_order/    (60 episodes)
    ├── put_blocks_to_color/      (50 episodes)
    └── pick_fruits_into_basket/  (50 episodes)
```
Each task folder follows the LeRobot format:

```
<task>/
  meta/info.json
  meta/tasks.jsonl
  data/chunk-000/episode_000000.parquet
  videos/chunk-000/
    observation.images.faceImg/episode_000000.mp4
    observation.images.leftImg/episode_000000.mp4
    observation.images.rightImg/episode_000000.mp4
```
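Task metadata lives in `meta/tasks.jsonl`, one JSON object per line. A minimal parsing sketch — the inline string below is illustrative, and the field names (`task_index`, `task`) follow common LeRobot conventions, so verify them against the dataset's own `meta/info.json`:

```python
import json

# Illustrative tasks.jsonl content; in practice, read meta/tasks.jsonl
# from a task folder. Field names are assumed LeRobot conventions.
tasks_jsonl = '{"task_index": 0, "task": "put_blocks_to_color"}\n'

# Parse one JSON object per non-empty line
tasks = [json.loads(line) for line in tasks_jsonl.splitlines() if line]
print(tasks[0]["task"])
```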
## Real Robot Data (56D)

Real-robot demonstrations contain 56-dimensional `observation.state` and `action` vectors. All data types are packed into a single vector in the following order.

### Dimension Layout

End-effector (index 0–13):

| Index | Key | Dim | Description |
|---|---|---|---|
| 0–2 | follow_left_ee_cartesian_pos | 3 | Left arm position (x, y, z) |
| 3–5 | follow_left_ee_rotation | 3 | Left arm rotation (roll, pitch, yaw) |
| 6 | follow_left_gripper | 1 | Left gripper open/close |
| 7–9 | follow_right_ee_cartesian_pos | 3 | Right arm position (x, y, z) |
| 10–12 | follow_right_ee_rotation | 3 | Right arm rotation (roll, pitch, yaw) |
| 13 | follow_right_gripper | 1 | Right gripper open/close |

Coordinate system: +x forward, +y left, +z up.
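If you need the end-effector rotation as a matrix rather than Euler angles, a conversion sketch follows. The dataset docs do not state the Euler convention, so this assumes the common ZYX (yaw, then pitch, then roll) order — verify against your robot's frames before relying on it:

```python
import numpy as np

def rpy_to_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Rotation matrix from roll/pitch/yaw, assuming ZYX convention
    (yaw about z, then pitch about y, then roll about x)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

R = rpy_to_matrix(0.1, 0.2, 0.3)
```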
Joint — left arm (index 14–32):

| Index | Key | Dim | Description |
|---|---|---|---|
| 14–19 | follow_left_arm_joint_pos | 6 | Left arm joint positions |
| 20–25 | follow_left_arm_joint_dev | 6 | Left arm joint velocities |
| 26–32 | follow_left_arm_joint_cur | 7 | Left arm joint currents (index 32 = gripper current) |
Joint — right arm (index 33–51):

| Index | Key | Dim | Description |
|---|---|---|---|
| 33–38 | follow_right_arm_joint_pos | 6 | Right arm joint positions |
| 39–44 | follow_right_arm_joint_dev | 6 | Right arm joint velocities |
| 45–51 | follow_right_arm_joint_cur | 7 | Right arm joint currents (index 51 = gripper current) |
Mobile manipulation extras (index 50–55, mobile tasks only):

| Index | Key | Dim | Description |
|---|---|---|---|
| 50–51 | head_actions | 2 | Head rotation (yaw, pitch) |
| 52 | height | 1 | Lift mechanism height |
| 53–55 | velocity_decomposed_odom | 3 | Chassis velocity (vx, vy, angular velocity) |
Tabletop tasks (Execution Reasoning and Semantic Reasoning) populate indices 0–49; indices 50–55 are zero. Mobile Manipulation tasks populate all 56 dimensions.
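The index tables above can be captured as named slices, which keeps downstream code readable. The array below is zero-filled stand-in data with the real shape:

```python
import numpy as np

# Named slices into the 56-D real-robot state/action vector,
# following the index tables above.
SLICES = {
    "ee_left":     slice(0, 7),    # xyz(3) + rpy(3) + gripper(1)
    "ee_right":    slice(7, 14),
    "joint_left":  slice(14, 33),  # pos(6) + vel(6) + current(7)
    "joint_right": slice(33, 52),  # pos(6) + vel(6) + current(7)
}

state = np.zeros((100, 56))  # stand-in for one episode, shape (T, 56)
parts = {name: state[:, s] for name, s in SLICES.items()}
print({name: arr.shape for name, arr in parts.items()})
```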
## Task List — Real Robot

Execution Reasoning (10 tasks):

| Task | Episodes | Key Challenge |
|---|---|---|
| arrange_cup_inverted_triangle | 528 | Multi-object spatial planning |
| put_spoon_to_bowl | 525 | Precision grasping, varied shapes |
| put_glasses_on_woodshelf | 513 | Fragile object handling |
| put_ring_onto_rod | 517 | Sub-cm insertion precision |
| put_items_into_drawer | 510 | Multi-object coordination |
| pick_items_into_basket | 532 | Adaptive grasping |
| pour_water_from_bottle | 526 | Force control, liquid dynamics |
| insert_wireline | 530 | Contact-rich, mm-level accuracy |
| put_stationery_in_case | 390 | Multi-object organization |
| put_blocks_to_color | 451 | Color-zone matching |
Semantic Reasoning (5 tasks):

| Task | Episodes | Key Challenge |
|---|---|---|
| sort_headphone | 515 | Recognize headphone type |
| classify_items_as_shape | 545 | Map objects to shape categories |
| press_button_in_order | 538 | Color-button mapping + sequence |
| pair_up_items | 540 | Match pairs by pattern |
| pick_fruits_into_basket | 645 | Fruit vs. non-fruit distinction |
Mobile Manipulation (5 tasks):

| Task | Episodes | Key Challenge |
|---|---|---|
| put_clothes_in_hamper | 540 | Navigate + pick clothes |
| hang_up_picture | 576 | Navigate to wall + hang |
| organize_shoes | 595 | Navigate + arrange on rack |
| put_bottle_on_woodshelf | 630 | Navigate to shelf + place |
| take_and_set_tableware | 531 | Navigate + set table |
## Simulation Data (28D)

Simulation demonstrations contain 28-dimensional `observation.state` and `action` vectors, combining end-effector (14D) and joint (14D) data from the same trajectories.
### Dimension Layout

End-effector (index 0–13):

| Index | Key | Dim | Description |
|---|---|---|---|
| 0–2 | ee_left_xyz | 3 | Left arm EE position (x, y, z) |
| 3–5 | ee_left_rpy | 3 | Left arm EE rotation (roll, pitch, yaw) |
| 6 | ee_left_gripper | 1 | Left gripper |
| 7–9 | ee_right_xyz | 3 | Right arm EE position (x, y, z) |
| 10–12 | ee_right_rpy | 3 | Right arm EE rotation (roll, pitch, yaw) |
| 13 | ee_right_gripper | 1 | Right gripper |
Joint (index 14–27):

| Index | Key | Dim | Description |
|---|---|---|---|
| 14–19 | joint_left_pos | 6 | Left arm joint positions |
| 20 | joint_left_gripper | 1 | Left gripper joint |
| 21–26 | joint_right_pos | 6 | Right arm joint positions |
| 27 | joint_right_gripper | 1 | Right gripper joint |
The first 14 dimensions (end-effector) are directly compatible with real-robot indices 0–13.
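Because real (56D) and sim (28D) vectors share the same 14-D end-effector prefix, a single extractor works for both. A minimal sketch on zero-filled stand-in arrays:

```python
import numpy as np

def split_ee(state: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split the shared 14-D end-effector prefix into left/right halves
    (each xyz + rpy + gripper = 7 dims). Works for both the 56-D
    real-robot and 28-D simulation vectors."""
    return state[:, 0:7], state[:, 7:14]

real_state = np.zeros((50, 56))  # stand-in real episode, shape (T, 56)
sim_state = np.zeros((80, 28))   # stand-in sim episode, shape (T, 28)

left_r, right_r = split_ee(real_state)
left_s, right_s = split_ee(sim_state)
```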
## Task List — Simulation

| Task | Episodes | Real-robot Counterpart |
|---|---|---|
| press_button_in_order | 60 | press_button_in_order |
| put_blocks_to_color | 50 | put_blocks_to_color |
| pick_fruits_into_basket | 50 | pick_fruits_into_basket |
## Camera Views

All tasks include 3 synchronized camera streams at 480×640 resolution:

| Camera | Key | Description |
|---|---|---|
| Front | observation.images.faceImg | Third-person overhead view |
| Left wrist | observation.images.leftImg | Left arm wrist-mounted camera |
| Right wrist | observation.images.rightImg | Right arm wrist-mounted camera |
## Recording Frequency

All data is recorded at 20 Hz.
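At a fixed 20 Hz, frame indices map directly to wall-clock time, which is handy for aligning states with video frames. A small sketch with a stand-in episode length:

```python
import numpy as np

FPS = 20  # recording frequency stated above

T = 240  # stand-in episode length in frames
timestamps = np.arange(T) / FPS  # seconds since episode start
duration = (T - 1) / FPS         # time span covered by the episode
```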
## Quick Usage

```python
import pandas as pd
import numpy as np

# Load one episode
df = pd.read_parquet(
    "real/execution_reasoning/put_blocks_to_color/data/chunk-000/episode_000000.parquet"
)
state = np.stack(df["observation.state"].tolist())  # (T, 56) for real, (T, 28) for sim
action = np.stack(df["action"].tolist())

# EE data (first 14 dims — same layout for real and sim)
ee_left = state[:, 0:7]   # xyz(3) + rpy(3) + gripper(1)
ee_right = state[:, 7:14]

# Joint data (real: index 14–51, sim: index 14–27)
left_joint_pos = state[:, 14:20]   # 6 joint positions
right_joint_pos = state[:, 33:39]  # real only (sim uses 21:27)
```
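Chunked-action policies (e.g., ACT-style action chunking) typically slice each episode's action array into overlapping fixed-length windows. A minimal sketch on stand-in data — the horizon of 16 is an arbitrary example, not a benchmark requirement:

```python
import numpy as np

def action_chunks(action: np.ndarray, horizon: int) -> np.ndarray:
    """Slide a window of length `horizon` over a (T, D) action array,
    returning (T - horizon + 1, horizon, D) overlapping chunks."""
    T, D = action.shape
    idx = np.arange(horizon)[None, :] + np.arange(T - horizon + 1)[:, None]
    return action[idx]

action = np.zeros((100, 56))        # stand-in episode, shape (T, 56)
chunks = action_chunks(action, 16)  # -> (85, 16, 56)
```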
## Citation

```bibtex
@misc{maniparena2026,
  title={ManipArena: A Benchmark for Bimanual Manipulation},
  year={2026},
  url={https://maniparena.x2robot.com},
}
```
## License

Apache License 2.0