---
license: mit
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: FlattenFold/base/data/chunk-000/episode_000000.parquet
---
<span style="color: red; font-weight: bold; font-size: 24px;">⚠️ !!! Waiting for information to fill in the links</span>
<div align="center">
  <a href="">
    <img src="https://img.shields.io/badge/GitHub-grey?logo=GitHub" alt="GitHub Badge">
  </a>
  <a href="">
    <img src="https://img.shields.io/badge/Project%20Page-blue?style=plastic" alt="Project Page Badge">
  </a>
  <a href="https://mmlab.hk/research/kai0">
    <img src="https://img.shields.io/badge/Research_Blog-black?style=flat" alt="Research Blog Badge">
  </a>
</div>

# Contents
- [About the Dataset](#about-the-dataset)
- [Dataset Structure](#dataset-structure)
  - [Folder hierarchy](#folder-hierarchy)
  - [Details](#details)
- [Download the Dataset](#download-the-dataset)
- [Load the Dataset](#load-the-dataset)
- [License and Citation](#license-and-citation)

# [About the Dataset](#contents)
- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
- **~130 hours** of real-world scenarios
- **Main Tasks**
  - ***FlattenFold***
    - Single task
    - Initial state: T-shirts are randomly tossed onto the table, presenting random crumpled configurations
    - Manipulation task: Operate the robotic arm to unfold the garment, then fold it
  - ***HangCloth***
    - Single task
    - Initial state: The hanger is randomly placed and the garment is randomly positioned on the table
    - Manipulation task: Operate the robotic arm to thread the hanger through the garment, then hang it on the rod
  - ***TeeShirtSort***
    - Garment classification and arrangement task
    - Initial state: A garment is randomly picked from the laundry basket
    - Classification: Determine whether the garment is a T-shirt or a dress shirt
    - Manipulation task:
      - If it is a T-shirt, fold the garment
      - If it is a dress shirt, expose the collar, then push it to one side of the table
- **Count of the dataset**

| Task | Base (episodes) | DAgger (episodes) | Total |
|------|-----------------|-------------------|-------|
| FlattenFold | 3,055 | 3,457 | 6,512 |
| HangCloth | 6,954 | 686 | 7,640 |
| TeeShirtSort | 5,988 | - | 5,988 |
| **Total** | **19,608** | **4,143** | **23,751** |

# [Dataset Structure](#contents)

## [Folder hierarchy](#contents)
Under each task directory, the data is partitioned into two subsets: `base` and `dagger`.
- `base` contains the original demonstration trajectories of robotic-arm manipulation for the garment-arrangement tasks.
- `dagger` contains on-policy recovery trajectories collected via iterative DAgger, designed to cover failure-recovery modes that are absent from static demonstrations.
```text
Kai0-data/
├── FlattenFold/
│   ├── base/
│   │   ├── data/
│   │   │   ├── chunk-000/
│   │   │   │   ├── episode_000000.parquet
│   │   │   │   ├── episode_000001.parquet
│   │   │   │   └── ...
│   │   │   └── ...
│   │   ├── videos/
│   │   │   ├── chunk-000/
│   │   │   │   ├── observation.images.hand_left/
│   │   │   │   │   ├── episode_000000.mp4
│   │   │   │   │   ├── episode_000001.mp4
│   │   │   │   │   └── ...
│   │   │   │   ├── observation.images.hand_right/
│   │   │   │   │   ├── episode_000000.mp4
│   │   │   │   │   ├── episode_000001.mp4
│   │   │   │   │   └── ...
│   │   │   │   ├── observation.images.top_head/
│   │   │   │   │   ├── episode_000000.mp4
│   │   │   │   │   ├── episode_000001.mp4
│   │   │   │   │   └── ...
│   │   │   │   └── ...
│   │   │   └── ...
│   │   └── meta/
│   │       ├── info.json
│   │       ├── episodes.jsonl
│   │       ├── tasks.jsonl
│   │       └── episodes_stats.jsonl
│   └── dagger/
├── HangCloth/
│   ├── base/
│   └── dagger/
├── TeeShirtSort/
│   ├── base/
│   └── dagger/
└── README.md
```
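For orientation, here is a minimal sketch that walks the layout above and counts episode files per task and subset; it assumes the dataset has been downloaded to a local `Kai0-data/` directory (adjust the path as needed):

```python
from pathlib import Path

# Assumed local download location; adjust to wherever you saved the dataset.
root = Path("Kai0-data")

for task in ["FlattenFold", "HangCloth", "TeeShirtSort"]:
    for subset in ["base", "dagger"]:
        data_dir = root / task / subset / "data"
        if not data_dir.exists():
            continue
        # Episodes are sharded into chunk-XXX directories, one parquet file per episode.
        n_episodes = sum(1 for _ in data_dir.glob("chunk-*/episode_*.parquet"))
        print(f"{task}/{subset}: {n_episodes} episodes")
```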

<a id='Details'></a>
## [Details](#contents)
### info.json
The basic structure of each subset's `meta/info.json`:
```json
{
    "codebase_version": "v2.1",
    "robot_type": "agilex",
    "total_episodes": ...,
    "total_frames": ...,
    "total_tasks": ...,
    "total_videos": ...,
    "total_chunks": ...,
    "chunks_size": ...,
    "fps": ...,
    "splits": {
        "train": ...
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "observation.images.top_head": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.hand_left": {
            ...
        },
        "observation.images.hand_right": {
            ...
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [14],
            "names": null
        },
        "action": {
            "dtype": "float32",
            "shape": [14],
            "names": null
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
```
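To show how the `data_path` and `video_path` templates in `info.json` resolve to concrete files, here is a small sketch; the subset directory and episode index are placeholders:

```python
import json
from pathlib import Path

# Placeholder: local path of one subset (each subset has its own meta/info.json).
subset_root = Path("Kai0-data/FlattenFold/base")
info = json.loads((subset_root / "meta" / "info.json").read_text())

episode_index = 0
episode_chunk = episode_index // info["chunks_size"]

# Fill the path templates for one episode and one camera stream.
parquet_path = subset_root / info["data_path"].format(
    episode_chunk=episode_chunk, episode_index=episode_index
)
video_path = subset_root / info["video_path"].format(
    episode_chunk=episode_chunk,
    video_key="observation.images.top_head",
    episode_index=episode_index,
)
print(parquet_path)  # .../data/chunk-000/episode_000000.parquet
print(video_path)    # .../videos/chunk-000/observation.images.top_head/episode_000000.mp4
```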

### [Parquet file format](#contents)

| Field Name | Shape | Meaning |
|------------|-------|---------|
| observation.state | [N, 14] | Joint angles: left arm `[:, :6]`, right arm `[:, 7:13]`; gripper opening: left `[:, 6]`, right `[:, 13]` |
| action | [N, 14] | Joint angles: left arm `[:, :6]`, right arm `[:, 7:13]`; gripper opening: left `[:, 6]`, right `[:, 13]` |
| timestamp | [N, 1] | Time elapsed since the start of the episode (in seconds) |
| frame_index | [N, 1] | Index of this frame within the current episode (0-indexed) |
| episode_index | [N, 1] | Index of the episode this frame belongs to |
| index | [N, 1] | Globally unique index across all frames in the dataset |
| task_index | [N, 1] | Index identifying the task type being performed |
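As an illustration of the layout above, the following sketch reads one episode's parquet file with pandas and splits the 14-dimensional state into arm-joint and gripper channels; the file path is a placeholder:

```python
import numpy as np
import pandas as pd

# Placeholder path to a single episode file.
df = pd.read_parquet("Kai0-data/FlattenFold/base/data/chunk-000/episode_000000.parquet")

state = np.stack(df["observation.state"].to_numpy())  # shape: [N, 14]
left_joints, left_gripper = state[:, :6], state[:, 6]       # left arm joint angles, left gripper opening
right_joints, right_gripper = state[:, 7:13], state[:, 13]  # right arm joint angles, right gripper opening

print(df.columns.tolist())
print(state.shape, left_joints.shape, right_joints.shape)
```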
### [tasks.jsonl](#contents)
Contains the task language prompts (natural-language instructions) that specify the manipulation task to be performed. Each entry maps a `task_index` to its corresponding task description, which can be used for language-conditioned policy training.
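For example, a `task_index` → instruction lookup can be built from `tasks.jsonl` along these lines (the path is a placeholder):

```python
import json

# Placeholder path to one subset's tasks.jsonl.
tasks = {}
with open("Kai0-data/FlattenFold/base/meta/tasks.jsonl") as f:
    for line in f:
        entry = json.loads(line)
        tasks[entry["task_index"]] = entry["task"]

print(tasks)  # e.g. {0: "<natural-language instruction>", ...}
```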
# [Download the Dataset](#contents)
### Python Script

```python
from huggingface_hub import hf_hub_download, snapshot_download
from datasets import load_dataset

# Download a single file
hf_hub_download(
    repo_id="OpenDriveLab-org/kai0",
    filename="episodes.jsonl",
    subfolder="FlattenFold/base/meta",
    repo_type="dataset",
    local_dir="/where/you/want/to/save"
)

# Download a specific folder
snapshot_download(
    repo_id="OpenDriveLab-org/kai0",
    local_dir="/where/you/want/to/save",
    repo_type="dataset",
    allow_patterns=["FlattenFold/base/data/*"]
)

# Load the entire dataset
dataset = load_dataset("OpenDriveLab-org/kai0")
```

### Terminal (CLI)

```bash
# Download a single file
hf download OpenDriveLab-org/kai0 \
  --include "FlattenFold/base/meta/info.json" \
  --repo-type dataset \
  --local-dir "/where/you/want/to/save"

# Download a specific folder
hf download OpenDriveLab-org/kai0 \
  --repo-type dataset \
  --include "FlattenFold/base/meta/*" \
  --local-dir "/where/you/want/to/save"

# Download the entire dataset
hf download OpenDriveLab-org/kai0 \
  --repo-type dataset \
  --local-dir "/where/you/want/to/save"
```

# [Load the Dataset](#contents)

## For LeRobot version < 0.4.0

Choose the appropriate import based on your version:

| Version | Import Path |
|---------|-------------|
| `<= 0.1.0` | `from lerobot.common.datasets.lerobot_dataset import LeRobotDataset` |
| `> 0.1.0` and `< 0.4.0` | `from lerobot.datasets.lerobot_dataset import LeRobotDataset` |

```python
# For version <= 0.1.0
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# For version > 0.1.0 and < 0.4.0
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load one subset; point `root` at its local directory (e.g. the downloaded FlattenFold/base)
dataset = LeRobotDataset(repo_id="OpenDriveLab-org/kai0", root="/where/you/saved/FlattenFold/base")
```
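Once loaded, each item of the dataset is a dictionary keyed by the features listed in `info.json`; a quick sanity check (continuing from the snippet above) might look like this:

```python
# Inspect a single frame of the loaded dataset.
frame = dataset[0]
print(frame.keys())
print(frame["observation.state"].shape)  # expected: (14,)
print(frame["action"].shape)             # expected: (14,)
```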

## For LeRobot version >= 0.4.0

You need to migrate the dataset from v2.1 to v3.0 first. See the official documentation: [Migrate a dataset from v2.1 to v3.0](https://huggingface.co/docs/lerobot/lerobot-dataset-v3)

```bash
python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<HF_USER/DATASET_ID>
```
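After migration, the dataset should load with the same `lerobot.datasets.lerobot_dataset` import path shown in the `> 0.1.0` row above; a sketch, assuming that path is unchanged in >= 0.4.0:

```python
# Assumption: the import path from the "> 0.1.0 and < 0.4.0" row still applies in >= 0.4.0.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load the migrated (v3.0) copy; <HF_USER/DATASET_ID> is the repo you converted above.
dataset = LeRobotDataset(repo_id="<HF_USER/DATASET_ID>")
```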
<span style="color: red; font-weight: bold; font-size: 24px;">⚠️ !!! Waiting for information to be filled in</span>
# License and Citation
All the data and code within this repo are under [](). Please consider citing our project if it helps your research.

```BibTeX
@misc{,
  title={},
  author={},
  howpublished={\url{}},
  year={}
}
```