task_categories:
- robotics
language:
- en
tags:
- RoboCOIN
- LeRobot
license: apache-2.0
configs:
- config_name: default
  data_files: data/chunk-{id}/episode_{id}.parquet
extra_gated_prompt: By accessing this dataset, you agree to cite the associated paper in your research/publications (see the "Citation" section for details). You agree not to use the dataset to conduct experiments that cause harm to human subjects.
extra_gated_fields:
  Company/Organization:
    type: text
    description: e.g., "ETH Zurich", "Boston Dynamics", "Independent Researcher"
  Country:
    type: country
    description: e.g., "Germany", "China", "United States"
codebase_version: v2.1
dataset_name: Agilex_Cobot_Magic_classify_objects_six
dataset_uuid: 00000000-0000-0000-0000-000000000000
scene_type:
  level1: household
  level2: kitchen
  level3: null
  level4: null
  level5: null
env_type: Environment type information is temporarily unavailable for this dataset.
objects:
- object_name: table
  level1: home_storage
  level2: table
  level3: null
  level4: null
  level5: null
- object_name: brown_basket
  level1: home_storage
  level2: brown_basket
  level3: null
  level4: null
  level5: null
- object_name: black_basket
  level1: food
  level2: black_basket
  level3: null
  level4: null
  level5: null
- object_name: bread
  level1: food
  level2: bread
  level3: null
  level4: null
  level5: null
- object_name: orange
  level1: food
  level2: orange
  level3: null
  level4: null
  level5: null
- object_name: green_lemon
  level1: food
  level2: green_lemon
  level3: null
  level4: null
  level5: null
- object_name: pink_clear_plastic_cup
  level1: kitchen_supplies
  level2: pink_clear_plastic_cup
  level3: null
  level4: null
  level5: null
- object_name: laundry_detergent
  level1: daily_necessities
  level2: laundry_detergent
  level3: null
  level4: null
  level5: null
- object_name: mentholatum_facial_cleanser
  level1: daily_necessities
  level2: mentholatum_facial_cleanser
  level3: null
  level4: null
  level5: null
task_operation_type: Operation type information is temporarily unavailable for this dataset.
task_instruction:
- place multiple objects separately in different baskets.
sub_tasks:
- subtask: Place the orange in the light basket with left gripper
  subtask_index: 0
- subtask: Grasp the xx with the right gripper
  subtask_index: 1
- subtask: Pick up the facial cleanser with left gripper
  subtask_index: 2
- subtask: End
  subtask_index: 3
- subtask: Place the facial cleanser in the dark basket with left gripper
  subtask_index: 4
- subtask: Place the XX into the basket on the left with the right gripper
  subtask_index: 5
- subtask: Place the lime in the light basket with right gripper
  subtask_index: 6
- subtask: Place the laundry detergent in the dark basket with right gripper
  subtask_index: 7
- subtask: Place the orange in the light basket with right gripper
  subtask_index: 8
- subtask: Pick up the lime with left gripper
  subtask_index: 9
- subtask: Place the XX into the basket on the right with the left gripper
  subtask_index: 10
- subtask: Place the XX into the basket on the left with the left gripper
  subtask_index: 11
- subtask: Place the XX into the basket on the right with the right gripper
  subtask_index: 12
- subtask: Grasp the xx with the left gripper
  subtask_index: 13
- subtask: Pick up the orange with left gripper
  subtask_index: 14
- subtask: Place the bread in the light basket with right gripper
  subtask_index: 15
- subtask: Pick up the laundry detergent with right gripper
  subtask_index: 16
- subtask: Abnormal
  subtask_index: 17
- subtask: Pick up the laundry detergent with left gripper
  subtask_index: 18
- subtask: Pick up the facial cleanser with right gripper
  subtask_index: 19
- subtask: Pick up the lime with right gripper
  subtask_index: 20
- subtask: Pick up the bread with right gripper
  subtask_index: 21
- subtask: Place the brown cup in the dark basket with left gripper
  subtask_index: 22
- subtask: Place the laundry detergent in the dark basket with left gripper
  subtask_index: 23
- subtask: Pick up the orange with right gripper
  subtask_index: 24
- subtask: Place the lime in the light basket with left gripper
  subtask_index: 25
- subtask: Pick up the brown cup with left gripper
  subtask_index: 26
- subtask: 'null'
  subtask_index: 27
atomic_actions:
- grasp
- lift
- lower
robot_name:
- Agilex_Cobot_Magic
end_effector_type: two_finger_gripper
tele_type: Teleoperation type information is temporarily unavailable for this dataset.
sensor_list:
- cam_head_rgb
- cam_left_wrist_rgb
- cam_right_wrist_rgb
cam_info:
  cam_head_rgb: dtype=video, shape=480x640x3, resolution=640x480, codec=av1, pix_fmt=yuv420p
  cam_left_wrist_rgb: dtype=video, shape=480x640x3, resolution=640x480, codec=av1, pix_fmt=yuv420p
  cam_right_wrist_rgb: dtype=video, shape=480x640x3, resolution=640x480, codec=av1, pix_fmt=yuv420p
depth_enabled: false
coordinate_definition: right-hand-frame
joint_rotation_dim: radian
end_rotation_dim: radian
end_translation_dim: meter
annotations:
- eef_acc_mag_annotation.jsonl
- eef_direction_annotation.jsonl
- eef_velocity_annotation.jsonl
- gripper_activity_annotation.jsonl
- gripper_mode_annotation.jsonl
- scene_annotations.jsonl
- subtask_annotations.jsonl
statistics:
  total_episodes: 199
  total_frames: 302506
  fps: 30
  total_tasks: 28
  total_videos: 597
  total_chunks: 1
  chunks_size: 1000
  state_dim: 26
  action_dim: 26
  camera_views: 3
  dataset_size: 3.88 GB
  frame_num: 302506
data_structure: |
  Agilex_Cobot_Magic_classify_objects_six_qced_hardlink/
  |-- annotations
  |   |-- eef_acc_mag_annotation.jsonl
  |   |-- eef_direction_annotation.jsonl
  |   |-- eef_velocity_annotation.jsonl
  |   |-- gripper_activity_annotation.jsonl
  |   |-- gripper_mode_annotation.jsonl
  |   |-- scene_annotations.jsonl
  |   `-- subtask_annotations.jsonl
  |-- data
  |   `-- chunk-000
  |       |-- episode_000000.parquet
  |       |-- episode_000001.parquet
  |       |-- episode_000002.parquet
  |       |-- episode_000003.parquet
  |       |-- episode_000004.parquet
  |       |-- episode_000005.parquet
  |       |-- episode_000006.parquet
  |       |-- episode_000007.parquet
  |       |-- episode_000008.parquet
  |       |-- episode_000009.parquet
  |       |-- episode_000010.parquet
  |       |-- episode_000011.parquet
  |       `-- ... (187 more entries)
  |-- meta
  |   |-- episodes.jsonl
  |   |-- episodes_stats.jsonl
  |   |-- info.json
  |   `-- tasks.jsonl
  |-- videos
  |   `-- chunk-000
  |       |-- observation.images.cam_head_rgb
  |       |-- observation.images.cam_left_wrist_rgb
  |       `-- observation.images.cam_right_wrist_rgb
  |-- info.yaml
  `-- README.md
splits:
  train: 0:198
features:
  observation.images.cam_head_rgb:
    dtype: video
    shape:
    - 480
    - 640
    - 3
    names:
    - height
    - width
    - channels
    info:
      video.height: 480
      video.width: 640
      video.codec: av1
      video.pix_fmt: yuv420p
      video.is_depth_map: false
      video.fps: 30
      video.channels: 3
      has_audio: false
  observation.images.cam_left_wrist_rgb:
    dtype: video
    shape:
    - 480
    - 640
    - 3
    names:
    - height
    - width
    - channels
    info:
      video.height: 480
      video.width: 640
      video.codec: av1
      video.pix_fmt: yuv420p
      video.is_depth_map: false
      video.fps: 30
      video.channels: 3
      has_audio: false
  observation.images.cam_right_wrist_rgb:
    dtype: video
    shape:
    - 480
    - 640
    - 3
    names:
    - height
    - width
    - channels
    info:
      video.height: 480
      video.width: 640
      video.codec: av1
      video.pix_fmt: yuv420p
      video.is_depth_map: false
      video.fps: 30
      video.channels: 3
      has_audio: false
  observation.state:
    dtype: float32
    shape:
    - 26
    names:
    - left_arm_joint_1_rad
    - left_arm_joint_2_rad
    - left_arm_joint_3_rad
    - left_arm_joint_4_rad
    - left_arm_joint_5_rad
    - left_arm_joint_6_rad
    - left_gripper_open
    - left_eef_pos_x_m
    - left_eef_pos_y_m
    - left_eef_pos_z_m
    - left_eef_rot_euler_x_rad
    - left_eef_rot_euler_y_rad
    - left_eef_rot_euler_z_rad
    - right_arm_joint_1_rad
    - right_arm_joint_2_rad
    - right_arm_joint_3_rad
    - right_arm_joint_4_rad
    - right_arm_joint_5_rad
    - right_arm_joint_6_rad
    - right_gripper_open
    - right_eef_pos_x_m
    - right_eef_pos_y_m
    - right_eef_pos_z_m
    - right_eef_rot_euler_x_rad
    - right_eef_rot_euler_y_rad
    - right_eef_rot_euler_z_rad
  action:
    dtype: float32
    shape:
    - 26
    names:
    - left_arm_joint_1_rad
    - left_arm_joint_2_rad
    - left_arm_joint_3_rad
    - left_arm_joint_4_rad
    - left_arm_joint_5_rad
    - left_arm_joint_6_rad
    - left_gripper_open
    - left_eef_pos_x_m
    - left_eef_pos_y_m
    - left_eef_pos_z_m
    - left_eef_rot_euler_x_rad
    - left_eef_rot_euler_y_rad
    - left_eef_rot_euler_z_rad
    - right_arm_joint_1_rad
    - right_arm_joint_2_rad
    - right_arm_joint_3_rad
    - right_arm_joint_4_rad
    - right_arm_joint_5_rad
    - right_arm_joint_6_rad
    - right_gripper_open
    - right_eef_pos_x_m
    - right_eef_pos_y_m
    - right_eef_pos_z_m
    - right_eef_rot_euler_x_rad
    - right_eef_rot_euler_y_rad
    - right_eef_rot_euler_z_rad
  timestamp:
    dtype: float32
    shape:
    - 1
    names: null
  frame_index:
    dtype: int64
    shape:
    - 1
    names: null
  episode_index:
    dtype: int64
    shape:
    - 1
    names: null
  index:
    dtype: int64
    shape:
    - 1
    names: null
  task_index:
    dtype: int64
    shape:
    - 1
    names: null
  subtask_annotation:
    names: null
    dtype: int32
    shape:
    - 5
  scene_annotation:
    names: null
    dtype: int32
    shape:
    - 1
  eef_sim_pose_state:
    names:
    - left_eef_pos_x
    - left_eef_pos_y
    - left_eef_pos_z
    - left_eef_rot_x
    - left_eef_rot_y
    - left_eef_rot_z
    - right_eef_pos_x
    - right_eef_pos_y
    - right_eef_pos_z
    - right_eef_rot_x
    - right_eef_rot_y
    - right_eef_rot_z
    dtype: float32
    shape:
    - 12
  eef_sim_pose_action:
    names:
    - left_eef_pos_x
    - left_eef_pos_y
    - left_eef_pos_z
    - left_eef_rot_x
    - left_eef_rot_y
    - left_eef_rot_z
    - right_eef_pos_x
    - right_eef_pos_y
    - right_eef_pos_z
    - right_eef_rot_x
    - right_eef_rot_y
    - right_eef_rot_z
    dtype: float32
    shape:
    - 12
  eef_direction_state:
    names:
    - left_eef_direction
    - right_eef_direction
    dtype: int32
    shape:
    - 2
  eef_direction_action:
    names:
    - left_eef_direction
    - right_eef_direction
    dtype: int32
    shape:
    - 2
  eef_velocity_state:
    names:
    - left_eef_velocity
    - right_eef_velocity
    dtype: int32
    shape:
    - 2
  eef_velocity_action:
    names:
    - left_eef_velocity
    - right_eef_velocity
    dtype: int32
    shape:
    - 2
  eef_acc_mag_state:
    names:
    - left_eef_acc_mag
    - right_eef_acc_mag
    dtype: int32
    shape:
    - 2
  eef_acc_mag_action:
    names:
    - left_eef_acc_mag
    - right_eef_acc_mag
    dtype: int32
    shape:
    - 2
  gripper_mode_state:
    names:
    - left_gripper_mode
    - right_gripper_mode
    dtype: int32
    shape:
    - 2
  gripper_mode_action:
    names:
    - left_gripper_mode
    - right_gripper_mode
    dtype: int32
    shape:
    - 2
  gripper_activity_state:
    names:
    - left_gripper_activity
    - right_gripper_activity
    dtype: int32
    shape:
    - 2
  gripper_activity_action:
    names:
    - left_gripper_activity
    - right_gripper_activity
    dtype: int32
    shape:
    - 2
  gripper_open_scale_state:
    names:
    - left_gripper_open_scale
    - right_gripper_open_scale
    dtype: float32
    shape:
    - 2
  gripper_open_scale_action:
    names:
    - left_gripper_open_scale
    - right_gripper_open_scale
    dtype: float32
    shape:
    - 2
authors:
  contributed_by:
  - name: RoboCOIN Team at Beijing Academy of Artificial Intelligence (BAAI)
dataset_description: This dataset uses an extended format based on LeRobot and is fully compatible with LeRobot.
homepage: https://flagopen.github.io/RoboCOIN/
paper: https://arxiv.org/abs/2511.17441
repository: https://github.com/FlagOpen/RoboCOIN
contact_info: For questions, issues, or feedback regarding this dataset, please contact us.
support_info: For technical support, please open an issue on our GitHub repository.
license_details: apache-2.0
citation_bibtex: |
  @article{robocoin,
    title={RoboCOIN: An Open-Sourced Bimanual Robotic Data Collection for Integrated Manipulation},
    author={Shihan Wu, Xuecheng Liu, Shaoxuan Xie, Pengwei Wang, Xinghang Li, Bowen Yang, Zhe Li, Kai Zhu, Hongyu Wu, Yiheng Liu, Zhaoye Long, Yue Wang, Chong Liu, Dihan Wang, Ziqiang Ni, Xiang Yang, You Liu, Ruoxuan Feng, Runtian Xu, Lei Zhang, Denghang Huang, Chenghao Jin, Anlan Yin, Xinlong Wang, Zhenguo Sun, Junkai Zhao, Mengfei Du, Mingyu Cao, Xiansheng Chen, Hongyang Cheng, Xiaojie Zhang, Yankai Fu, Ning Chen, Cheng Chi, Sixiang Chen, Huaihai Lyu, Xiaoshuai Hao, Yequan Wang, Bo Lei, Dong Liu, Xi Yang, Yance Jiao, Tengfei Pan, Yunyan Zhang, Songjing Wang, Ziqian Zhang, Xu Liu, Ji Zhang, Caowei Meng, Zhizheng Zhang, Jiyang Gao, Song Wang, Xiaokun Leng, Zhiqiang Xie, Zhenzhen Zhou, Peng Huang, Wu Yang, Yandong Guo, Yichao Zhu, Suibing Zheng, Hao Cheng, Xinmin Ding, Yang Yue, Huanqian Wang, Chi Chen, Jingrui Pang, YuXi Qian, Haoran Geng, Lianli Gao, Haiyuan Li, Bin Fang, Gao Huang, Yaodong Yang, Hao Dong, He Wang, Hang Zhao, Yadong Mu, Di Hu, Hao Zhao, Tiejun Huang, Shanghang Zhang, Yonghua Lin, Zhongyuan Wang and Guocai Yao},
    journal={arXiv preprint arXiv:2511.17441},
    url={https://arxiv.org/abs/2511.17441},
    year={2025},
  }
additional_citations: |
  If you use this dataset, please also consider citing the
  LeRobot Framework: https://github.com/huggingface/lerobot
version_info: Initial Release
data_path: data/chunk-{id}/episode_{id}.parquet
video_path: videos/chunk-{id}/observation.images.cam_left_wrist_rgb/episode_{id}.mp4
video_url: videos/chunk-000/observation.images.cam_head_rgb/episode_000000.mp4
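The `observation.state` and `action` features above pack both arms into a single 26-dimensional float32 vector: per arm, six joint angles (rad), one gripper-open value, a 3-D end-effector position (m), and a 3-D Euler rotation (rad), left arm first. The sketch below decodes such a vector into named fields following that ordering; `STATE_NAMES` and `decode_state` are illustrative helpers defined here, not part of LeRobot or this dataset's tooling.

```python
# Name list mirroring the observation.state / action feature names above:
# 6 joints + gripper + 3-D eef position + 3-D eef Euler rotation, per arm.
STATE_NAMES = (
    [f"left_arm_joint_{i}_rad" for i in range(1, 7)]
    + ["left_gripper_open"]
    + [f"left_eef_pos_{ax}_m" for ax in "xyz"]
    + [f"left_eef_rot_euler_{ax}_rad" for ax in "xyz"]
    + [f"right_arm_joint_{i}_rad" for i in range(1, 7)]
    + ["right_gripper_open"]
    + [f"right_eef_pos_{ax}_m" for ax in "xyz"]
    + [f"right_eef_rot_euler_{ax}_rad" for ax in "xyz"]
)


def decode_state(vec):
    """Map one length-26 state/action vector to a name -> value dict."""
    if len(vec) != len(STATE_NAMES):
        raise ValueError(f"expected {len(STATE_NAMES)} values, got {len(vec)}")
    return dict(zip(STATE_NAMES, vec))
```

For example, `decode_state(row)` applied to one row of an `episode_*.parquet` state column returns a dict where `left_gripper_open` is element 6 and the right arm block starts at element 13.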