# ManipArena Dataset

Training dataset for ManipArena, a real-robot benchmark and competition for bimanual manipulation at the CVPR 2026 Embodied AI Workshop.
This dataset provides rich multi-modal demonstrations in LeRobot format, covering 20 real-robot tasks and 3 simulation tasks. Beyond standard end-effector trajectories, we provide joint positions, velocities, currents, camera views, and mobile-base states — giving participants the freedom to explore diverse input representations.
## Changelog

### 2026-05-12

**Added**

- Three-level language annotations for the 15 tabletop real-robot tasks (10 `execution_reasoning` + 5 `semantic_reasoning`) under `language_annotations/real/<category>/<task>.jsonl`. Each episode now has a short task name, a detailed action description, and a scene + action description (see the Language Annotations section).
**Changed**

- Replaced the earlier 60 / 50 / 50-episode simulation data with high-quality refreshed demonstrations for the three sim tasks. The updated simulation set has larger episode counts (660 / 501 / 501) recorded with an `ex001_6r` dual-arm robot, and the state/action layout is now a combined 28D = end-effector (14D) + joint (14D). Front-camera resolution is now 720×1280 (wrist cameras remain 480×640). `press_button_in_order` now has 11 task variants (different button-color orderings).
**Unchanged**

- All `real/<category>/<task>/{meta,data,videos}` trajectory folders are untouched.
### 2026-03-17

**Initial release**

- 20 real-robot tasks (10 `execution_reasoning` + 5 `semantic_reasoning` + 5 `mobile_manipulation`) in LeRobot v2.1 format.
- 3 simulation tasks (60 / 50 / 50 episodes).
- 3 synchronized camera streams (front + left wrist + right wrist), 20 Hz recording.
## Dataset Structure

```
maniparena-dataset/
├── real/
│   ├── execution_reasoning/     (10 tasks, ~5,000 episodes)
│   ├── semantic_reasoning/      (5 tasks, ~2,800 episodes)
│   └── mobile_manipulation/     (5 tasks, ~2,900 episodes)
└── sim/
    ├── press_button_in_order/   (660 episodes, 11 task variants)
    ├── put_blocks_to_color/     (501 episodes)
    └── pick_fruits_into_basket/ (501 episodes)
```
Each task folder follows LeRobot format:

```
<task>/
  meta/info.json
  meta/tasks.jsonl
  data/chunk-000/episode_000000.parquet
  videos/chunk-000/
    observation.images.faceImg/episode_000000.mp4
    observation.images.leftImg/episode_000000.mp4
    observation.images.rightImg/episode_000000.mp4
```
## Real Robot Data
Tabletop tasks (Execution Reasoning + Semantic Reasoning) have 56-dimensional state/action vectors. Mobile Manipulation tasks have 62-dimensional state/action vectors (56D + 6D mobile extras).
### Dimension Layout

**End-effector (index 0–13, 14D):**

| Index | Key | Dim | Description |
|---|---|---|---|
| 0–2 | `follow_left_ee_cartesian_pos` | 3 | Left arm position (x, y, z) |
| 3–5 | `follow_left_ee_rotation` | 3 | Left arm rotation (roll, pitch, yaw) |
| 6 | `follow_left_gripper` | 1 | Left gripper open/close |
| 7–9 | `follow_right_ee_cartesian_pos` | 3 | Right arm position (x, y, z) |
| 10–12 | `follow_right_ee_rotation` | 3 | Right arm rotation (roll, pitch, yaw) |
| 13 | `follow_right_gripper` | 1 | Right gripper open/close |
Coordinate system: +x forward, +y left, +z up.
**Joint — left arm (index 14–34, 21D):**

| Index | Key | Dim | Description |
|---|---|---|---|
| 14–20 | `follow_left_arm_joint_pos` | 7 | Left arm joint positions (6 joints + gripper) |
| 21–27 | `follow_left_arm_joint_dev` | 7 | Left arm joint velocities (6 joints + gripper) |
| 28–34 | `follow_left_arm_joint_cur` | 7 | Left arm joint currents (6 joints + gripper) |
**Joint — right arm (index 35–55, 21D):**

| Index | Key | Dim | Description |
|---|---|---|---|
| 35–41 | `follow_right_arm_joint_pos` | 7 | Right arm joint positions (6 joints + gripper) |
| 42–48 | `follow_right_arm_joint_dev` | 7 | Right arm joint velocities (6 joints + gripper) |
| 49–55 | `follow_right_arm_joint_cur` | 7 | Right arm joint currents (6 joints + gripper) |
The last element (index 20, 27, 34, 41, 48, 55) in each 7D joint group is the gripper value.
**Mobile manipulation extras (index 56–61, mobile tasks only, 6D):**

| Index | Key | Dim | Description |
|---|---|---|---|
| 56–57 | `head_actions` | 2 | Head rotation (yaw, pitch) |
| 58 | `height` | 1 | Lift mechanism height |
| 59–61 | `velocity_decomposed_odom` | 3 | Chassis velocity (vx, vy, angular velocity) |
Tabletop tasks = 56D (index 0–55). Mobile Manipulation tasks = 62D (index 0–61).
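The index tables above can be captured as named slices for convenient access. A minimal sketch (the `REAL_LAYOUT` dictionary and `split_state` helper are our own names, not part of the dataset; index values are taken directly from the tables):

```python
import numpy as np

# Named slices for the documented real-robot state/action layout.
# (Dictionary keys and helper name are illustrative, not dataset fields.)
REAL_LAYOUT = {
    "left_ee": slice(0, 7),            # xyz(3) + rpy(3) + gripper(1)
    "right_ee": slice(7, 14),
    "left_joint_pos": slice(14, 21),   # 6 joints + gripper
    "left_joint_vel": slice(21, 28),
    "left_joint_cur": slice(28, 35),
    "right_joint_pos": slice(35, 42),
    "right_joint_vel": slice(42, 49),
    "right_joint_cur": slice(49, 56),
    "mobile_extras": slice(56, 62),    # mobile tasks only: head(2) + height(1) + odom(3)
}

def split_state(state: np.ndarray) -> dict:
    """Split a (T, 56) tabletop or (T, 62) mobile state array into named parts."""
    d = state.shape[-1]
    assert d in (56, 62), f"unexpected state dim {d}"
    # Skip slices that fall outside the array (mobile_extras for 56D tabletop data).
    return {name: state[..., s] for name, s in REAL_LAYOUT.items() if s.stop <= d}
```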
### Task List — Real Robot

**Execution Reasoning (10 tasks):**

| Task | Episodes | Key Challenge |
|---|---|---|
| `arrange_cup_inverted_triangle` | 528 | Multi-object spatial planning |
| `put_spoon_to_bowl` | 525 | Precision grasping, varied shapes |
| `put_glasses_on_woodshelf` | 513 | Fragile object handling |
| `put_ring_onto_rod` | 517 | Sub-cm insertion precision |
| `put_items_into_drawer` | 510 | Multi-object coordination |
| `pick_items_into_basket` | 532 | Adaptive grasping |
| `pour_water_from_bottle` | 526 | Force control, liquid dynamics |
| `insert_wireline` | 530 | Contact-rich, mm-level accuracy |
| `put_stationery_in_case` | 525 | Multi-object organization |
| `put_blocks_to_color` | 451 | Color-zone matching |
**Semantic Reasoning (5 tasks):**

| Task | Episodes | Key Challenge |
|---|---|---|
| `sort_headphone` | 515 | Recognize headphone type |
| `classify_items_as_shape` | 545 | Map objects to shape categories |
| `press_button_in_order` | 538 | Color-button mapping + sequence |
| `pair_up_items` | 540 | Match pairs by pattern |
| `pick_fruits_into_basket` | 645 | Fruit vs. non-fruit distinction |
**Mobile Manipulation (5 tasks):**

| Task | Episodes | Key Challenge |
|---|---|---|
| `put_clothes_in_hamper` | 540 | Navigate + pick clothes |
| `hang_up_picture` | 576 | Navigate to wall + hang |
| `organize_shoes` | 595 | Navigate + arrange on rack |
| `put_bottle_on_woodshelf` | 630 | Navigate to shelf + place |
| `take_and_set_tableware` | 531 | Navigate + set table |
## Simulation Data (28D)

Simulation demonstrations contain 28-dimensional `observation.state` and `action` vectors, combining end-effector (14D) and joint (14D) representations of the same trajectories, produced by an `ex001_6r` dual-arm robot.
### Dimension Layout

**End-effector (index 0–13):**

| Index | Key | Dim | Description |
|---|---|---|---|
| 0–2 | `left_pos_x/y/z` | 3 | Left arm EE position (x, y, z) |
| 3–5 | `left_rot_x/y/z` | 3 | Left arm EE rotation (roll, pitch, yaw) |
| 6 | `left_gripper` | 1 | Left gripper |
| 7–9 | `right_pos_x/y/z` | 3 | Right arm EE position (x, y, z) |
| 10–12 | `right_rot_x/y/z` | 3 | Right arm EE rotation (roll, pitch, yaw) |
| 13 | `right_gripper` | 1 | Right gripper |
**Joint (index 14–27):**

| Index | Key | Dim | Description |
|---|---|---|---|
| 14–19 | `left_arm_joint1..6` | 6 | Left arm joint positions |
| 20 | `left_arm_gripper` | 1 | Left gripper (joint side) |
| 21–26 | `right_arm_joint1..6` | 6 | Right arm joint positions |
| 27 | `right_arm_gripper` | 1 | Right gripper (joint side) |
The first 14 dimensions (EE) are directly compatible with real-robot indices 0–13.
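A minimal sketch of this compatibility (the helper names below are ours, for illustration only):

```python
import numpy as np

def ee_block(state: np.ndarray) -> np.ndarray:
    """Shared 14D end-effector block, valid for real (56/62D) and sim (28D) states."""
    assert state.shape[-1] in (28, 56, 62)
    return state[..., :14]

def sim_joint_block(state: np.ndarray) -> np.ndarray:
    """14D joint block of a sim state: 6 joints + gripper per arm."""
    assert state.shape[-1] == 28
    return state[..., 14:28]
```

An EE-only policy can thus consume real and sim trajectories through the same first 14 dimensions.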
### Task List — Simulation

| Task | Episodes | Task Variants | Real-robot Counterpart |
|---|---|---|---|
| `press_button_in_order` | 660 | 11 color orders | `semantic_reasoning/press_button_in_order` |
| `pick_fruits_into_basket` | 501 | 1 | `semantic_reasoning/pick_fruits_into_basket` |
| `put_blocks_to_color` | 501 | 1 | `execution_reasoning/put_blocks_to_color` |
`press_button_in_order` has 11 task variants (different button-color orderings), indexed via `task_index` in each parquet file; see `meta/tasks.jsonl`.
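Assuming the standard LeRobot `tasks.jsonl` schema (one JSON object per line with `task_index` and `task` fields), the variant instructions can be mapped like so (the function name is our own):

```python
import json

def load_task_map(path: str) -> dict:
    """Map task_index -> instruction string from a LeRobot-style tasks.jsonl."""
    task_map = {}
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            task_map[record["task_index"]] = record["task"]
    return task_map

# e.g. task_map = load_task_map("sim/press_button_in_order/meta/tasks.jsonl")
# then look up each episode's variant via its task_index column in the parquet.
```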
## Camera Views

All tasks include 3 synchronized camera streams. Real-robot videos are 480×640. Updated simulation videos use 720×1280 for the front camera and 480×640 for the wrist cameras.
| Camera | Key | Description |
|---|---|---|
| Front | `observation.images.faceImg` | Third-person overhead view |
| Left wrist | `observation.images.leftImg` | Left arm wrist-mounted camera |
| Right wrist | `observation.images.rightImg` | Right arm wrist-mounted camera |
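Combining the camera keys with the folder layout shown in Dataset Structure, the per-episode video paths can be built as follows (the helper itself is illustrative, not dataset tooling):

```python
# Camera keys as documented in the table above.
CAMERA_KEYS = [
    "observation.images.faceImg",   # front
    "observation.images.leftImg",   # left wrist
    "observation.images.rightImg",  # right wrist
]

def episode_video_paths(task_root: str, episode_index: int, chunk: int = 0) -> dict:
    """Return {camera_key: mp4 path} for one episode, following the documented layout."""
    ep = f"episode_{episode_index:06d}.mp4"
    return {key: f"{task_root}/videos/chunk-{chunk:03d}/{key}/{ep}"
            for key in CAMERA_KEYS}
```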
## Recording Frequency
All data is recorded at 20 Hz.
## Language Annotations

For the 15 tabletop tasks (10 `execution_reasoning` + 5 `semantic_reasoning`), we provide three-level language annotations in a dedicated top-level folder:
```
language_annotations/
└── real/
    ├── execution_reasoning/
    │   ├── arrange_cup_inverted_triangle.jsonl
    │   ├── insert_wireline.jsonl
    │   └── ... (10 tasks)
    └── semantic_reasoning/
        ├── classify_items_as_shape.jsonl
        └── ... (5 tasks)
```
Each `.jsonl` file follows the LeRobot `episodes.jsonl` schema (one JSON object per episode, aligned with `real/<category>/<task>/data/...` via `episode_index`). The `tasks` field contains three strings:
| Index | Type | Description |
|---|---|---|
| `tasks[0]` | Short task | One-sentence task name (e.g. "Arrange the cups into an inverted triangle."), identical to `real/<task>/meta/episodes.jsonl` |
| `tasks[1]` | Detailed action | Step-by-step description of what the robot does, including object attributes and the target configuration |
| `tasks[2]` | Scene + action | Full scene description (objects, layout, background) followed by the detailed action |
`mobile_manipulation` and `sim` tasks are not covered yet; their `real/<task>/meta/episodes.jsonl` and `sim/<task>/meta/episodes.jsonl` still contain only the short task name.
```python
import json
import pandas as pd

# Load language annotations: episode_index -> [short, action, scene_and_action]
annos = {}
with open("language_annotations/real/execution_reasoning/arrange_cup_inverted_triangle.jsonl") as f:
    for line in f:
        record = json.loads(line)  # parse each line once
        annos[record["episode_index"]] = record["tasks"]

# Join with trajectory data
df = pd.read_parquet("real/execution_reasoning/arrange_cup_inverted_triangle/data/chunk-000/episode_000000.parquet")
short, action, scene_and_action = annos[0]
```
## Quick Usage

```python
import pandas as pd
import numpy as np

# Load one episode
df = pd.read_parquet("real/execution_reasoning/put_blocks_to_color/data/chunk-000/episode_000000.parquet")
state = np.stack(df["observation.state"].tolist())  # (T, 56) tabletop real, (T, 62) mobile, (T, 28) sim
action = np.stack(df["action"].tolist())

# EE data (first 14 dims, same layout for real and sim)
ee_left = state[:, 0:7]    # xyz(3) + rpy(3) + gripper(1)
ee_right = state[:, 7:14]

# Joint data (real-robot trajectories)
left_arm_joint_pos = state[:, 14:21]   # 6 joints + gripper
right_arm_joint_pos = state[:, 35:42]  # 6 joints + gripper
```
## Citation

If you find this dataset useful in your research, please consider citing:

```bibtex
@misc{sun2026maniparena,
  title={ManipArena: Comprehensive Real-world Evaluation of Reasoning-Oriented Generalist Robot Manipulation},
  author={Yu Sun and Meng Cao and Ping Yang and Rongtao Xu and Yunxiao Yan and Runze Xu and Liang Ma and Roy Gan and Andy Zhai and Qingxuan Chen and Zunnan Xu and Hao Wang and Jincheng Yu and Lucy Liang and Qian Wang and Ivan Laptev and Ian D Reid and Xiaodan Liang},
  year={2026},
  eprint={2603.28545},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2603.28545},
}
```
## License
Apache License 2.0