---
task_categories:
- robotics
language:
- en
- zh
extra_gated_prompt: 'By accessing this dataset, you agree to cite the associated paper in your research and publications (see the ''Citation'' section for details). You also agree not to use the dataset to conduct experiments that cause harm to human subjects.'
extra_gated_fields:
Country:
type: 'country'
description: 'e.g., ''Germany'', ''China'', ''United States'''
Company/Organization:
type: 'text'
description: 'e.g., ''ETH Zurich'', ''Boston Dynamics'', ''Independent Researcher'''
tags:
- RoboCOIN
- LeRobot
- Test
frame_range: 10K-100K
license: apache-2.0
configs:
- config_name: default
data_files: data/*/*.parquet
---
# Agibot_g1_Pick_apple
**UUID:** `550e8400-e29b-41d4-a716-446655440000`
![Dataset Preview](./assets/thumbnails/Agibot-g1_Pick_apple.jpg)
[Watch Video](./assets/videos/Agibot-g1_Pick_apple.mp4)
### Overview
| Metric | Value |
|--------|-------|
| **Total Frames** | 50000 |
| **Frame Rate (FPS)** | 30 |
| **Total Episodes** | 100 |
| **Total Tasks** | 100 |
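The overview figures imply a fixed per-episode length; a quick derivation (assuming frames are evenly distributed across episodes, which the round numbers above suggest):

```python
# Derive per-episode statistics from the overview table.
# Assumes frames are evenly distributed across the 100 episodes.
TOTAL_FRAMES = 50_000
FPS = 30
TOTAL_EPISODES = 100

frames_per_episode = TOTAL_FRAMES // TOTAL_EPISODES   # 500 frames
seconds_per_episode = frames_per_episode / FPS        # ~16.7 s

print(frames_per_episode, round(seconds_per_episode, 1))  # 500 16.7
```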
### Primary Task Instruction
> pick up the apple from the table and place it into the basket.
### Robot Configuration
- **Robot Name:** `Agibot+G1edu-u3`
- **Robot Type:** `G1edu-u3`
- **Codebase Version:** `v2.1`
- **End-Effector Name:** `Agibot+two_finger_gripper, Agibot+three_finger_gripper`
- **End-Effector Type:** `two_finger_gripper, three_finger_gripper`
- **Teleoperation Type:** `cable, wireless`
## 🏠 Scene and Objects
### Scene Type
`home-kitchen`
### Objects
- `table-furniture-table`
- `basket-container-basket`
## 🎯 Task Descriptions
- **Standardized Task Name:** `Agibot_g1_Pick_apple`
- **Standardized Task Description:** `Left_arm+pick+apple`
- **Operation Type:** `single_arm`
### Sub-Tasks
This dataset includes 3 distinct subtasks:
1. **Grasp the apple with the left gripper**
2. **Place the apple into the basket with the left gripper**
3. **End**
### Atomic Actions
- `pick`
- `place`
- `grasp`
## 🛠️ Hardware & Sensors
### Sensors
- `Depth_camera`
- `RGB_camera`
- `IMU`
- `Force_sensor`
### Camera Information
- `Camera 1: RGB, 1280x720, 30fps`
- `Camera 2: RGB, 1280x720, 30fps`
- `Camera 3: Depth, 640x480, 30fps`
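The camera descriptions follow a regular `Camera N: TYPE, WxH, FPSfps` pattern, so they can be parsed into structured records; a minimal sketch (the strings are copied from this README):

```python
import re

# Camera description strings as listed in this README.
cameras = [
    "Camera 1: RGB, 1280x720, 30fps",
    "Camera 2: RGB, 1280x720, 30fps",
    "Camera 3: Depth, 640x480, 30fps",
]

pattern = re.compile(r"Camera (\d+): (\w+), (\d+)x(\d+), (\d+)fps")

parsed = []
for cam in cameras:
    idx, kind, w, h, fps = pattern.fullmatch(cam).groups()
    parsed.append({"index": int(idx), "type": kind,
                   "width": int(w), "height": int(h), "fps": int(fps)})

print(parsed[2])
# {'index': 3, 'type': 'Depth', 'width': 640, 'height': 480, 'fps': 30}
```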
### Coordinate System
- **Definition:** `right_hand_frame`
- **Origin (XYZ):** `[0, 0, 0]`
### Dimensions & Units
- **Joint Rotation:** `radian`
- **End-Effector Rotation:** `radian`
- **End-Effector Translation:** `meter`
- **Base Rotation:** `radian`
- **Base Translation:** `meter`
- **Operation Platform Height:** `77.2 cm`
## 📊 Dataset Statistics
| Metric | Value |
|--------|-------|
| **Total Episodes** | 100 |
| **Total Frames** | 50000 |
| **Total Tasks** | 100 |
| **Total Videos** | 100 |
| **Total Chunks** | 10 |
| **Chunk Size** | 10 |
| **FPS** | 30 |
| **Total Duration** | 00:27:47 |
| **Video Resolution** | 1280x720 |
| **State Dimensions** | 14 |
| **Action Dimensions** | 7 |
| **Camera Views** | 3 |
| **Dataset Size** | 2.7GB |
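The total duration follows directly from the frame count and frame rate; a quick sanity check using the values in the table above:

```python
from datetime import timedelta

# Values from the statistics table above.
TOTAL_FRAMES = 50_000
FPS = 30

total_seconds = TOTAL_FRAMES / FPS            # ~1666.67 s
duration = timedelta(seconds=round(total_seconds))

print(duration)  # 0:27:47
```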
## 📂 Data Splits
The dataset is organized into the following splits (ranges are half-open, end-exclusive):
- **Training**: Episodes 0:89 (89 episodes)
- **Validation**: Episodes 89:99 (10 episodes)
- **Test**: Episodes 99:100 (1 episode)
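The split boundaries resolve against the 100 episode indices with Python's end-exclusive slice semantics; a small sketch (assuming splits are taken as contiguous slices of the episode index range):

```python
# Resolve the split ranges against the 100 episode indices.
# Slices are half-open (end-exclusive), matching the ranges above.
TOTAL_EPISODES = 100
episodes = list(range(TOTAL_EPISODES))

train = episodes[0:89]    # indices 0..88  -> 89 episodes
val   = episodes[89:99]   # indices 89..98 -> 10 episodes
test  = episodes[99:]     # index 99 to the end -> 1 episode

print(len(train), len(val), len(test))  # 89 10 1
```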
## 📁 Dataset Structure
This dataset follows the LeRobot format and contains the following components:
### Data Files
- **Videos**: Compressed video files containing RGB camera observations
- **State Data**: Robot joint positions, velocities, and other state information
- **Action Data**: Robot action commands and trajectories
- **Metadata**: Episode metadata, timestamps, and annotations
### File Organization
- **Data Path Pattern**: `data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet`
- **Video Path Pattern**: `videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4`
- **Chunking**: Data is organized into 10 chunks of 10 episodes each
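The path patterns above are Python format strings; a minimal sketch resolving them for a given episode (the chunk assignment `episode_index // chunk_size` is an assumption based on LeRobot-style layouts):

```python
# Path templates copied from the file-organization section above.
DATA_PATH = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
VIDEO_PATH = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
CHUNK_SIZE = 10  # episodes per chunk, from the chunking note above

def episode_paths(episode_index: int, video_key: str = "camera_1"):
    # Assumption: chunks hold consecutive blocks of CHUNK_SIZE episodes.
    chunk = episode_index // CHUNK_SIZE
    data = DATA_PATH.format(episode_chunk=chunk, episode_index=episode_index)
    video = VIDEO_PATH.format(episode_chunk=chunk, episode_index=episode_index,
                              video_key=video_key)
    return data, video

print(episode_paths(42))
# ('data/chunk-004/episode_000042.parquet',
#  'videos/chunk-004/camera_1/episode_000042.mp4')
```

The `video_key` default is a placeholder; the actual camera keys are defined in `meta/info.json`.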
## 🎥 Camera Views
This dataset includes 3 camera views.
### Features Schema
The dataset includes the following features:
#### Visual Observations
- **observation.images.camera_1**: video
- FPS: 30
- Codec: h264
#### State and Action
- **observation.state**: float32
- **action**: float32
#### Temporal Information
- **timestamp**: float64
- **frame_index**: int64
- **episode_index**: int64
#### Annotations
- **subtask_annotation**: string
- **scene_annotation**: string
#### Motion Features
- **eef_sim_pose_state**: float32
- Dimensions: x, y, z, qx, qy, qz, qw
### Meta Information
The complete dataset metadata is available in [meta/info.json](meta/info.json):
```json
{
"info": "Complete metadata available in meta/info.json"
}
```
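The metadata file can be read with the standard `json` module. The keys shown below are assumptions based on common LeRobot-style `info.json` layouts, not the verified contents of this dataset's file:

```python
import json
from pathlib import Path

# Illustrative only: these keys are assumptions based on typical
# LeRobot-style info.json layouts, not verified contents.
example_info = json.loads("""
{
  "codebase_version": "v2.1",
  "fps": 30,
  "total_episodes": 100,
  "total_frames": 50000,
  "chunks_size": 10
}
""")

# Reading the real file would look like:
#   info = json.loads(Path("meta/info.json").read_text())
print(example_info["total_frames"] // example_info["fps"])  # 1666
```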
### Directory Structure
The dataset is organized as follows (showing leaf directories with first 5 files only):
```
dataset/
├── data/
│ └── chunk-*/episode_*.parquet
├── videos/
│ └── chunk-*/camera_*/episode_*.mp4
├── meta/
│ └── info.json
└── README.md
```
## 🏷️ Available Annotations
This dataset includes rich annotations to support diverse learning approaches:
### Additional Features
- **End-Effector Simulation Pose**: 6-DoF pose information (3-D position plus unit quaternion, seven values) for end-effectors in simulation space
- Available for both state and action
- **Gripper Opening Scale**: Continuous gripper opening measurements
- Available for both state and action
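The end-effector pose packs a 6-DoF pose into seven numbers in the `(x, y, z, qx, qy, qz, qw)` layout listed in the features schema; a small sketch of unpacking and validating one (the numeric values are made up for illustration):

```python
import math

# Hypothetical pose vector in the (x, y, z, qx, qy, qz, qw) layout;
# the numbers are made up for illustration.
pose = [0.31, -0.12, 0.85, 0.0, 0.0, 0.7071068, 0.7071068]

position, quaternion = pose[:3], pose[3:]

# A valid rotation quaternion should have unit norm.
norm = math.sqrt(sum(q * q for q in quaternion))
assert abs(norm - 1.0) < 1e-6

print(position, round(norm, 6))  # [0.31, -0.12, 0.85] 1.0
```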
## 🏷️ Dataset Tags
- `RoboCOIN`
- `LeRobot`
## 👥 Authors
### Contributors
This dataset is contributed by the RoboCOIN Team:
- [https://flagopen.github.io/RoboCOIN/](https://flagopen.github.io/RoboCOIN/)
## 🔗 Links
- **🏠 Homepage:** [https://flagopen.github.io/RoboCOIN/](https://flagopen.github.io/RoboCOIN/)
- **📄 Paper:** [https://arxiv.org/abs/2511.17441](https://arxiv.org/abs/2511.17441)
- **💻 Repository:** [https://github.com/FlagOpen/RoboCOIN](https://github.com/FlagOpen/RoboCOIN)
- **📜 License:** apache-2.0
## 📞 Contact and Support
For questions, issues, or feedback regarding this dataset, please contact:
- **Email:** robocoin@baai.ac.cn
### Support
For technical support, please open an issue on our GitHub repository.
## 📄 License
This dataset is released under the **apache-2.0** license.
Please refer to the LICENSE file for full license terms and conditions.
## 📚 Citation
If you use this dataset in your research, please cite:
```bibtex
@article{robocoin,
title={RoboCOIN: An Open-Sourced Bimanual Robotic Data Collection for Integrated Manipulation},
author={...},
journal={arXiv preprint arXiv:2511.17441},
url={https://arxiv.org/abs/2511.17441},
year={2025}
}
```
### Additional References
If you use this dataset, please also consider citing:
- LeRobot Framework: https://github.com/huggingface/lerobot
## 📌 Version History
- v1.0.0 (2025-11): Initial release