---
task_categories:
- robotics
language:
- en
extra_gated_prompt: 'By accessing this dataset, you agree to cite the associated paper in your research and publications (see the "Citation" section for details), and you agree not to use the dataset to conduct experiments that cause harm to human subjects.'
extra_gated_fields:
Company/Organization:
type: 'text'
description: 'e.g., "ETH Zurich", "Boston Dynamics", "Independent Researcher"'
Country:
type: 'country'
description: 'e.g., "Germany", "China", "United States"'
tags:
- RoboCOIN
- LeRobot
license: apache-2.0
configs:
- config_name: default
data_files: data/chunk-*/episode_*.parquet
---
# Agilex_Cobot_Magic_classify_objects_six
## Dataset Description
This dataset uses an extended format built on LeRobot and remains fully compatible with standard LeRobot tooling, so it can be loaded as shown below.
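A minimal loading sketch follows. It assumes a recent `lerobot` install (the import path and attribute names have shifted across versions) and uses a hypothetical Hub repo id; substitute the actual path under which this dataset is hosted.

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Hypothetical repo id -- replace with the actual Hub path of this dataset.
repo_id = "your-org/Agilex_Cobot_Magic_classify_objects_six"

dataset = LeRobotDataset(repo_id)
print(dataset.num_episodes)  # expected: 199
print(dataset.fps)           # expected: 30

# Each item is a dict keyed by the feature names listed under "Features (Full YAML)",
# e.g. "observation.state" (26-dim), "action" (26-dim), and the three camera views.
frame = dataset[0]
print(frame["observation.state"].shape)  # torch.Size([26])
```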
## Task Preview
<video src="videos/chunk-000/observation.images.cam_head_rgb/episode_000000.mp4" controls width="640"></video>
[View Video Directly](videos/chunk-000/observation.images.cam_head_rgb/episode_000000.mp4)
### Overview
- **Total Episodes:** 199
- **Total Frames:** 302506
- **FPS:** 30
- **Dataset Size:** 3.88 GB
- **Robot Name:** `Agilex_Cobot_Magic`
- **End-Effector Type:** `two_finger_gripper`
- **Teleoperation Type:** not provided in this release
- **Sensors:** `cam_head_rgb`, `cam_left_wrist_rgb`, `cam_right_wrist_rgb`
- **Camera Information:** see the [Camera Information](#camera-information) section for per-stream details
- **Scene:** `household->kitchen`
- **Objects:** `table(unknown)`, `brown_basket(unknown)`, `black_basket(unknown)`, `bread(unknown)`, `orange(unknown)`, `green_lemon(unknown)`, `pink_clear_plastic_cup(unknown)`, `laundry_detergent(unknown)`, `mentholatum_facial_cleanser(unknown)`
- **Task Description:** place multiple objects separately in different baskets.
### Primary Task Instruction
> place multiple objects separately in different baskets.
### Robot Configuration
- **Robot Name:** `Agilex_Cobot_Magic`
- **Codebase Version:** `v2.1`
- **End-Effector Type:** `two_finger_gripper`
- **Teleoperation Type:** not provided in this release
## Scene and Objects
### Scene Type
`household->kitchen`
### Objects
- `table(unknown)`
- `brown_basket(unknown)`
- `black_basket(unknown)`
- `bread(unknown)`
- `orange(unknown)`
- `green_lemon(unknown)`
- `pink_clear_plastic_cup(unknown)`
- `laundry_detergent(unknown)`
- `mentholatum_facial_cleanser(unknown)`
## Task Descriptions
- **Standardized Task Description:** `place multiple objects separately in different baskets.`
- **Operation Type:** not provided in this release
- **Environment Type:** not provided in this release
### Sub-Tasks
This dataset includes 28 distinct sub-task labels, listed verbatim from the task metadata (including placeholder names such as `xx`/`XX` and the `Abnormal`/`null` markers):
1. **Place the orange in the light basket with left gripper** (Index: 0)
2. **Grasp the xx with the right gripper** (Index: 1)
3. **Pick up the facial cleanser with left gripper** (Index: 2)
4. **End** (Index: 3)
5. **Place the facial cleanser in the dark basket with left gripper** (Index: 4)
6. **Place the XX into the basket on the left with the right gripper** (Index: 5)
7. **Place the lime in the light basket with right gripper** (Index: 6)
8. **Place the laundry detergent in the dark basket with right gripper** (Index: 7)
9. **Place the orange in the light basket with right gripper** (Index: 8)
10. **Pick up the lime with left gripper** (Index: 9)
11. **Place the XX into the basket on the right with the left gripper** (Index: 10)
12. **Place the XX into the basket on the left with the left gripper** (Index: 11)
13. **Place the XX into the basket on the right with the right gripper** (Index: 12)
14. **Grasp the xx with the left gripper** (Index: 13)
15. **Pick up the orange with left gripper** (Index: 14)
16. **Place the bread in the light basket with right gripper** (Index: 15)
17. **Pick up the laundry detergent with right gripper** (Index: 16)
18. **Abnormal** (Index: 17)
19. **Pick up the laundry detergent with left gripper** (Index: 18)
20. **Pick up the facial cleanser with right gripper** (Index: 19)
21. **Pick up the lime with right gripper** (Index: 20)
22. **Pick up the bread with right gripper** (Index: 21)
23. **Place the brown cup in the dark basket with left gripper** (Index: 22)
24. **Place the laundry detergent in the dark basket with left gripper** (Index: 23)
25. **Pick up the orange with right gripper** (Index: 24)
26. **Place the lime in the light basket with left gripper** (Index: 25)
27. **Pick up the brown cup with left gripper** (Index: 26)
28. **null** (Index: 27)
### Atomic Actions
- `grasp`
- `lift`
- `lower`
## Hardware and Sensors
### Sensors
- `cam_head_rgb`
- `cam_left_wrist_rgb`
- `cam_right_wrist_rgb`
### Camera Information
- `cam_head_rgb`: dtype=video, shape=480x640x3, resolution=640x480, codec=av1, pix_fmt=yuv420p
- `cam_left_wrist_rgb`: dtype=video, shape=480x640x3, resolution=640x480, codec=av1, pix_fmt=yuv420p
- `cam_right_wrist_rgb`: dtype=video, shape=480x640x3, resolution=640x480, codec=av1, pix_fmt=yuv420p
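All three streams are 640x480 RGB encoded with AV1, so decoding requires an FFmpeg build with an AV1 decoder (e.g. dav1d). A minimal PyAV sketch for pulling frames from one episode video:

```python
import av  # pip install av; the underlying FFmpeg must support AV1 decoding

path = "videos/chunk-000/observation.images.cam_head_rgb/episode_000000.mp4"

with av.open(path) as container:
    stream = container.streams.video[0]
    print(stream.codec_context.name)  # an AV1 decoder, e.g. "av1" or "libdav1d"
    for frame in container.decode(stream):
        rgb = frame.to_ndarray(format="rgb24")  # uint8 array of shape (480, 640, 3)
        break  # first frame only, for this sketch
```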
### Coordinate System
- **Definition:** `right-hand-frame`
### Dimensions & Units
- **Joint Rotation:** `radian`
- **End-Effector Rotation:** `radian`
- **End-Effector Translation:** `meter`
## Dataset Statistics
| Metric | Value |
|--------|-------|
| **Total Episodes** | 199 |
| **Total Frames** | 302506 |
| **Total Tasks** | 28 |
| **Total Videos** | 597 |
| **Total Chunks** | 1 |
| **Chunk Size** | 1000 |
| **FPS** | 30 |
| **State Dimensions** | 26 |
| **Action Dimensions** | 26 |
| **Camera Views** | 3 |
| **Dataset Size** | 3.88 GB |
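These figures mirror the standard LeRobot metadata in `meta/info.json`; a quick sanity check (key names follow the LeRobot v2.1 layout, so adjust if your copy differs):

```python
import json

with open("meta/info.json") as f:
    info = json.load(f)

print(info.get("total_episodes"))  # expected: 199
print(info.get("total_frames"))    # expected: 302506
print(info.get("fps"))             # expected: 30
print(info.get("chunks_size"))     # expected: 1000
```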
## Data Splits
The dataset is organized into the following splits:
- **Training**: Episodes 0–198 (all 199 episodes)
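Since all 199 episodes form the training split, loading the split amounts to loading every episode. If the `lerobot` version in use supports an `episodes` argument (most v2.x releases do), an explicit selection looks like this; the repo id is hypothetical, as above:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

repo_id = "your-org/Agilex_Cobot_Magic_classify_objects_six"  # hypothetical

# Episodes 0-198 inclusive, i.e. the entire training split.
train_set = LeRobotDataset(repo_id, episodes=list(range(199)))
```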
## Dataset Structure
This dataset follows the LeRobot format and contains the following components:
### Data Files
- **Videos**: Compressed video files containing RGB camera observations
- **State Data**: Robot joint positions, velocities, and other state information
- **Action Data**: Robot action commands and trajectories
- **Metadata**: Episode metadata, timestamps, and annotations
### File Organization
- **Data Path Pattern**: `data/chunk-{chunk_id}/episode_{episode_id}.parquet`
- **Video Path Pattern**: `videos/chunk-{chunk_id}/{camera_name}/episode_{episode_id}.mp4`
- **Chunking**: Data is organized into 1 chunk of size 1000; an episode's chunk index is its episode index integer-divided by the chunk size (see the sketch below)
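With a chunk size of 1000 and only 199 episodes, everything lands in `chunk-000`. A small sketch of that path arithmetic (the helper name is ours, not part of the dataset tooling):

```python
CHUNK_SIZE = 1000  # "chunks_size" in meta/info.json

def episode_paths(episode_index: int, camera: str = "observation.images.cam_head_rgb"):
    """Return the (data, video) relative paths for one episode under this layout."""
    chunk = episode_index // CHUNK_SIZE
    data = f"data/chunk-{chunk:03d}/episode_{episode_index:06d}.parquet"
    video = f"videos/chunk-{chunk:03d}/{camera}/episode_{episode_index:06d}.mp4"
    return data, video

print(episode_paths(0)[0])  # data/chunk-000/episode_000000.parquet
```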
### Data Structure (Tree)
```
Agilex_Cobot_Magic_classify_objects_six/
|-- annotations
| |-- eef_acc_mag_annotation.jsonl
| |-- eef_direction_annotation.jsonl
| |-- eef_velocity_annotation.jsonl
| |-- gripper_activity_annotation.jsonl
| |-- gripper_mode_annotation.jsonl
| |-- scene_annotations.jsonl
| `-- subtask_annotations.jsonl
|-- data
| `-- chunk-000
| |-- episode_000000.parquet
| |-- episode_000001.parquet
| |-- episode_000002.parquet
| |-- episode_000003.parquet
| |-- episode_000004.parquet
| |-- episode_000005.parquet
| |-- episode_000006.parquet
| |-- episode_000007.parquet
| |-- episode_000008.parquet
| |-- episode_000009.parquet
| |-- episode_000010.parquet
| |-- episode_000011.parquet
| `-- ... (187 more entries)
|-- meta
| |-- episodes.jsonl
| |-- episodes_stats.jsonl
| |-- info.json
| `-- tasks.jsonl
|-- videos
| `-- chunk-000
| |-- observation.images.cam_head_rgb
| |-- observation.images.cam_left_wrist_rgb
| `-- observation.images.cam_right_wrist_rgb
|-- info.yaml
`-- README.md
```
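Individual episode files can also be read directly, without LeRobot tooling; a minimal pandas sketch (parquet support comes from pyarrow):

```python
import pandas as pd  # pip install pandas pyarrow

df = pd.read_parquet("data/chunk-000/episode_000000.parquet")

print(len(df))             # number of 30 Hz frames in this episode
print(sorted(df.columns))  # observation.state, action, timestamp, frame_index, ...

# Per the feature schema below, state and action are 26-dim float32 vectors.
print(len(df["observation.state"].iloc[0]))  # 26
```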
## Camera Views
This dataset includes 3 camera views: `cam_head_rgb`, `cam_left_wrist_rgb`, `cam_right_wrist_rgb`.
## Features (Full YAML)
```yaml
observation.images.cam_head_rgb:
dtype: video
shape:
- 480
- 640
- 3
names:
- height
- width
- channels
info:
video.height: 480
video.width: 640
video.codec: av1
video.pix_fmt: yuv420p
video.is_depth_map: false
video.fps: 30
video.channels: 3
has_audio: false
observation.images.cam_left_wrist_rgb:
dtype: video
shape:
- 480
- 640
- 3
names:
- height
- width
- channels
info:
video.height: 480
video.width: 640
video.codec: av1
video.pix_fmt: yuv420p
video.is_depth_map: false
video.fps: 30
video.channels: 3
has_audio: false
observation.images.cam_right_wrist_rgb:
dtype: video
shape:
- 480
- 640
- 3
names:
- height
- width
- channels
info:
video.height: 480
video.width: 640
video.codec: av1
video.pix_fmt: yuv420p
video.is_depth_map: false
video.fps: 30
video.channels: 3
has_audio: false
observation.state:
dtype: float32
shape:
- 26
names:
- left_arm_joint_1_rad
- left_arm_joint_2_rad
- left_arm_joint_3_rad
- left_arm_joint_4_rad
- left_arm_joint_5_rad
- left_arm_joint_6_rad
- left_gripper_open
- left_eef_pos_x_m
- left_eef_pos_y_m
- left_eef_pos_z_m
- left_eef_rot_euler_x_rad
- left_eef_rot_euler_y_rad
- left_eef_rot_euler_z_rad
- right_arm_joint_1_rad
- right_arm_joint_2_rad
- right_arm_joint_3_rad
- right_arm_joint_4_rad
- right_arm_joint_5_rad
- right_arm_joint_6_rad
- right_gripper_open
- right_eef_pos_x_m
- right_eef_pos_y_m
- right_eef_pos_z_m
- right_eef_rot_euler_x_rad
- right_eef_rot_euler_y_rad
- right_eef_rot_euler_z_rad
action:
dtype: float32
shape:
- 26
names:
- left_arm_joint_1_rad
- left_arm_joint_2_rad
- left_arm_joint_3_rad
- left_arm_joint_4_rad
- left_arm_joint_5_rad
- left_arm_joint_6_rad
- left_gripper_open
- left_eef_pos_x_m
- left_eef_pos_y_m
- left_eef_pos_z_m
- left_eef_rot_euler_x_rad
- left_eef_rot_euler_y_rad
- left_eef_rot_euler_z_rad
- right_arm_joint_1_rad
- right_arm_joint_2_rad
- right_arm_joint_3_rad
- right_arm_joint_4_rad
- right_arm_joint_5_rad
- right_arm_joint_6_rad
- right_gripper_open
- right_eef_pos_x_m
- right_eef_pos_y_m
- right_eef_pos_z_m
- right_eef_rot_euler_x_rad
- right_eef_rot_euler_y_rad
- right_eef_rot_euler_z_rad
timestamp:
dtype: float32
shape:
- 1
names: null
frame_index:
dtype: int64
shape:
- 1
names: null
episode_index:
dtype: int64
shape:
- 1
names: null
index:
dtype: int64
shape:
- 1
names: null
task_index:
dtype: int64
shape:
- 1
names: null
subtask_annotation:
names: null
dtype: int32
shape:
- 5
scene_annotation:
names: null
dtype: int32
shape:
- 1
eef_sim_pose_state:
names:
- left_eef_pos_x
- left_eef_pos_y
- left_eef_pos_z
- left_eef_rot_x
- left_eef_rot_y
- left_eef_rot_z
- right_eef_pos_x
- right_eef_pos_y
- right_eef_pos_z
- right_eef_rot_x
- right_eef_rot_y
- right_eef_rot_z
dtype: float32
shape:
- 12
eef_sim_pose_action:
names:
- left_eef_pos_x
- left_eef_pos_y
- left_eef_pos_z
- left_eef_rot_x
- left_eef_rot_y
- left_eef_rot_z
- right_eef_pos_x
- right_eef_pos_y
- right_eef_pos_z
- right_eef_rot_x
- right_eef_rot_y
- right_eef_rot_z
dtype: float32
shape:
- 12
eef_direction_state:
names:
- left_eef_direction
- right_eef_direction
dtype: int32
shape:
- 2
eef_direction_action:
names:
- left_eef_direction
- right_eef_direction
dtype: int32
shape:
- 2
eef_velocity_state:
names:
- left_eef_velocity
- right_eef_velocity
dtype: int32
shape:
- 2
eef_velocity_action:
names:
- left_eef_velocity
- right_eef_velocity
dtype: int32
shape:
- 2
eef_acc_mag_state:
names:
- left_eef_acc_mag
- right_eef_acc_mag
dtype: int32
shape:
- 2
eef_acc_mag_action:
names:
- left_eef_acc_mag
- right_eef_acc_mag
dtype: int32
shape:
- 2
gripper_mode_state:
names:
- left_gripper_mode
- right_gripper_mode
dtype: int32
shape:
- 2
gripper_mode_action:
names:
- left_gripper_mode
- right_gripper_mode
dtype: int32
shape:
- 2
gripper_activity_state:
names:
- left_gripper_activity
- right_gripper_activity
dtype: int32
shape:
- 2
gripper_activity_action:
names:
- left_gripper_activity
- right_gripper_activity
dtype: int32
shape:
- 2
gripper_open_scale_state:
names:
- left_gripper_open_scale
- right_gripper_open_scale
dtype: float32
shape:
- 2
gripper_open_scale_action:
names:
- left_gripper_open_scale
- right_gripper_open_scale
dtype: float32
shape:
- 2
```
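Per the names above, each 26-dim state/action vector holds 13 values per arm, left arm first: 6 joint angles (rad), 1 gripper opening, 3 EEF positions (m), and 3 EEF Euler angles (rad). A sketch that unpacks a vector by that layout (the helper is ours, not part of the dataset tooling):

```python
import numpy as np

def unpack_arm(vec: np.ndarray) -> dict:
    """Unpack one arm's 13-dim slice following the feature-name layout above."""
    return {
        "joints_rad": vec[0:6],          # arm_joint_1 .. arm_joint_6
        "gripper_open": float(vec[6]),   # gripper opening
        "eef_pos_m": vec[7:10],          # eef_pos_x/y/z
        "eef_rot_euler_rad": vec[10:13], # eef_rot_euler_x/y/z
    }

state = np.zeros(26, dtype=np.float32)  # stand-in for one observation.state row
left, right = unpack_arm(state[:13]), unpack_arm(state[13:])
print(left["joints_rad"].shape, right["eef_pos_m"].shape)  # (6,) (3,)
```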
## Available Annotations
This dataset includes rich annotations to support diverse learning approaches:
- `eef_acc_mag_annotation.jsonl`
- `eef_direction_annotation.jsonl`
- `eef_velocity_annotation.jsonl`
- `gripper_activity_annotation.jsonl`
- `gripper_mode_annotation.jsonl`
- `scene_annotations.jsonl`
- `subtask_annotations.jsonl`
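Each file is standard JSONL (one JSON object per line). The per-record schema is not documented on this card, so a quick way to inspect it is to print the keys of the first few records:

```python
import json

with open("annotations/subtask_annotations.jsonl") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        print(sorted(record))  # field names of this (undocumented) schema
        if i >= 2:
            break
```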
## Dataset Tags
- `RoboCOIN`
- `LeRobot`
## Authors
### Contributors
This dataset is contributed by the RoboCOIN Team at the Beijing Academy of Artificial Intelligence (BAAI).
### Annotators
No annotator information available.
## Links
- **Homepage:** [https://flagopen.github.io/RoboCOIN/](https://flagopen.github.io/RoboCOIN/)
- **Paper:** [https://arxiv.org/abs/2511.17441](https://arxiv.org/abs/2511.17441)
- **Repository:** [https://github.com/FlagOpen/RoboCOIN](https://github.com/FlagOpen/RoboCOIN)
## Contact and Support
For questions, issues, or feedback regarding this dataset, please open an issue on the [RoboCOIN GitHub repository](https://github.com/FlagOpen/RoboCOIN).
## License
This dataset is released under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) (`apache-2.0`).
## Citation
If you use this dataset in your research, please cite:
```bibtex
@article{robocoin,
  title={RoboCOIN: An Open-Sourced Bimanual Robotic Data Collection for Integrated Manipulation},
  author={Shihan Wu and Xuecheng Liu and Shaoxuan Xie and Pengwei Wang and Xinghang Li and Bowen Yang and Zhe Li and Kai Zhu and Hongyu Wu and Yiheng Liu and Zhaoye Long and Yue Wang and Chong Liu and Dihan Wang and Ziqiang Ni and Xiang Yang and You Liu and Ruoxuan Feng and Runtian Xu and Lei Zhang and Denghang Huang and Chenghao Jin and Anlan Yin and Xinlong Wang and Zhenguo Sun and Junkai Zhao and Mengfei Du and Mingyu Cao and Xiansheng Chen and Hongyang Cheng and Xiaojie Zhang and Yankai Fu and Ning Chen and Cheng Chi and Sixiang Chen and Huaihai Lyu and Xiaoshuai Hao and Yequan Wang and Bo Lei and Dong Liu and Xi Yang and Yance Jiao and Tengfei Pan and Yunyan Zhang and Songjing Wang and Ziqian Zhang and Xu Liu and Ji Zhang and Caowei Meng and Zhizheng Zhang and Jiyang Gao and Song Wang and Xiaokun Leng and Zhiqiang Xie and Zhenzhen Zhou and Peng Huang and Wu Yang and Yandong Guo and Yichao Zhu and Suibing Zheng and Hao Cheng and Xinmin Ding and Yang Yue and Huanqian Wang and Chi Chen and Jingrui Pang and YuXi Qian and Haoran Geng and Lianli Gao and Haiyuan Li and Bin Fang and Gao Huang and Yaodong Yang and Hao Dong and He Wang and Hang Zhao and Yadong Mu and Di Hu and Hao Zhao and Tiejun Huang and Shanghang Zhang and Yonghua Lin and Zhongyuan Wang and Guocai Yao},
  journal={arXiv preprint arXiv:2511.17441},
  url={https://arxiv.org/abs/2511.17441},
  year={2025}
}
```
### Additional References
If you use this dataset, please also consider citing:
- LeRobot framework: [https://github.com/huggingface/lerobot](https://github.com/huggingface/lerobot)
## Version Information
Initial Release