---
license: cc-by-nc-nd-4.0
task_categories:
  - robotics
tags:
  - LeRobot
configs:
  - config_name: default
    data_files: data/*/*.parquet
language:
  - en
---

# Kai0-Data

⚠️ TODO: fill in links once the information is available.

## Contents

## About the Dataset

- This dataset was created using LeRobot
- ~200 hours of real-world scenarios across 1 main task and 3 sub-tasks
- A clothing-organization task that involves identifying the type of clothing and determining the next action based on its category
- Sub-tasks:
  - Folding
    - Randomly pick a piece of clothing from the basket and place it on the workbench
    - If it is a short T-shirt, fold it
  - Hanging Preparation
    - Randomly pick a piece of clothing from the basket and place it on the workbench
    - If it is a dress shirt, locate the collar and drag the clothing to the right side
  - Hanging
    - Hang the dress shirt properly

## Dataset Structure

### Folder hierarchy

```
dataset_root/
├── data/
│   ├── chunk-000/
│   │   ├── episode_000000.parquet
│   │   ├── episode_000001.parquet
│   │   └── ...
│   └── ...
├── videos/
│   ├── chunk-000/
│   │   ├── observation.images.hand_left/
│   │   │   ├── episode_000000.mp4
│   │   │   ├── episode_000001.mp4
│   │   │   └── ...
│   │   ├── observation.images.hand_right/
│   │   │   ├── episode_000000.mp4
│   │   │   ├── episode_000001.mp4
│   │   │   └── ...
│   │   ├── observation.images.top_head/
│   │   │   ├── episode_000000.mp4
│   │   │   ├── episode_000001.mp4
│   │   │   └── ...
│   │   └── ...
│   └── ...
├── meta/
│   ├── info.json
│   ├── episodes.jsonl
│   ├── tasks.jsonl
│   └── episodes_stats.jsonl
└── README.md
```
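The layout above can be traversed with plain `pathlib` globbing. The sketch below builds a tiny mock of the hierarchy in a temporary directory (the mock itself is illustrative, not part of the dataset) and then enumerates episodes and per-camera videos:

```python
import tempfile
from pathlib import Path

# Build a tiny mock of the folder hierarchy (illustrative only).
root = Path(tempfile.mkdtemp())
for i in range(3):
    p = root / "data" / "chunk-000" / f"episode_{i:06d}.parquet"
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()
for cam in ("hand_left", "hand_right", "top_head"):
    for i in range(3):
        v = root / "videos" / "chunk-000" / f"observation.images.{cam}" / f"episode_{i:06d}.mp4"
        v.parent.mkdir(parents=True, exist_ok=True)
        v.touch()

# Enumerate episode parquet files and their per-camera videos.
episodes = sorted(root.glob("data/chunk-*/episode_*.parquet"))
videos = sorted(root.glob("videos/chunk-*/observation.images.*/episode_*.mp4"))
print(len(episodes), len(videos))  # 3 9  (3 episodes, 3 cameras each)
```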

### Details

#### `info.json`

The basic structure of `info.json`:

```json
{
    "codebase_version": "v2.1",
    "robot_type": "agilex",
    "total_episodes": ...,
    "total_frames": ...,
    "total_tasks": ...,
    "total_videos": ...,
    "total_chunks": ...,
    "chunks_size": ...,
    "fps": ...,
    "splits": {
        "train": ...
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "observation.images.top_head": {
            "dtype": "video",
            "shape": [
                480,
                640,
                3
            ],
            "names": [
                "height",
                "width",
                "channel"
            ],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.hand_left": {
            ...
        },
        "observation.images.hand_right": {
            ...
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [
                14
            ],
            "names": null
        },
        "action": {
            "dtype": "float32",
            "shape": [
                14
            ],
            "names": null
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [
                1
            ],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [
                1
            ],
            "names": null
        }
    }
}
```
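The `data_path` and `video_path` fields are Python format-string templates. A minimal sketch of resolving an episode index to its file paths, assuming a `chunks_size` of 1000 for illustration (the real value comes from your downloaded `info.json`):

```python
# Path templates as they appear in info.json; chunks_size is an assumed example value.
info = {
    "chunks_size": 1000,
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
}

def episode_paths(info, episode_index, video_key="observation.images.top_head"):
    # Episodes are grouped into fixed-size chunks; the chunk id is the
    # episode index divided (integer division) by chunks_size.
    chunk = episode_index // info["chunks_size"]
    data = info["data_path"].format(episode_chunk=chunk, episode_index=episode_index)
    video = info["video_path"].format(
        episode_chunk=chunk, episode_index=episode_index, video_key=video_key
    )
    return data, video

print(episode_paths(info, 1234))
# ('data/chunk-001/episode_001234.parquet',
#  'videos/chunk-001/observation.images.top_head/episode_001234.mp4')
```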
#### Parquet file format

| Field Name | Shape | Meaning |
| --- | --- | --- |
| `observation.state` | [N, 14] | Joint angles: left `[:, :6]`, right `[:, 7:13]`; gripper opening: left `[:, 6]`, right `[:, 13]` |
| `action` | [N, 14] | Joint angles: left `[:, :6]`, right `[:, 7:13]`; gripper opening: left `[:, 6]`, right `[:, 13]` |
| `timestamp` | [N, 1] | Time elapsed since the start of the episode (in seconds) |
| `frame_index` | [N, 1] | Index of this frame within the current episode (0-indexed) |
| `episode_index` | [N, 1] | Index of the episode this frame belongs to |
| `index` | [N, 1] | Globally unique index across all frames in the dataset |
| `task_index` | [N, 1] | Index identifying the task type being performed |
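The slicing convention in the table can be demonstrated on a synthetic array (a stand-in for one episode's `observation.state`; real data would come from the parquet file):

```python
import numpy as np

# Synthetic stand-in for one episode's observation.state, shape [N, 14].
N = 5
state = np.arange(N * 14, dtype=np.float32).reshape(N, 14)

# Slicing convention from the table above:
left_joints = state[:, :6]      # left arm joint angles
left_gripper = state[:, 6]      # left gripper opening
right_joints = state[:, 7:13]   # right arm joint angles
right_gripper = state[:, 13]    # right gripper opening

print(left_joints.shape, right_joints.shape)  # (5, 6) (5, 6)
```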

#### `tasks.jsonl`

`positive`/`negative`: labels indicating the advantage of each frame's action, where `positive` means the action benefits future outcomes and `negative` means otherwise.
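The file is in JSON Lines format: one JSON object per line. A minimal parsing sketch, with hypothetical field names (`task_index`, `task`) and content, since the exact schema is not shown here:

```python
import json

# Hypothetical tasks.jsonl content; actual field names and values may differ.
raw = "\n".join([
    '{"task_index": 0, "task": "fold the t-shirt"}',
    '{"task_index": 1, "task": "drag the dress shirt to the right"}',
])

# JSON Lines: parse each non-empty line as a standalone JSON object.
tasks = [json.loads(line) for line in raw.splitlines() if line.strip()]
index_to_task = {t["task_index"]: t["task"] for t in tasks}
print(index_to_task[0])  # fold the t-shirt
```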

## Download the Dataset

### Python Script

```python
from huggingface_hub import hf_hub_download, snapshot_download
from datasets import load_dataset

# Download a single file
hf_hub_download(
    repo_id="OpenDriveLab-org/kai0",
    filename="episodes.jsonl",
    subfolder="meta",
    repo_type="dataset",
    local_dir="where/you/want/to/save"
)

# Download a specific folder
snapshot_download(
    repo_id="OpenDriveLab-org/kai0",
    local_dir="/where/you/want/to/save",
    repo_type="dataset",
    allow_patterns=["data/*"]
)

# Load the entire dataset
dataset = load_dataset("OpenDriveLab-org/kai0")
```

### Terminal (CLI)

```shell
# Download a single file
hf download OpenDriveLab-org/kai0 \
    --include "meta/info.json" \
    --repo-type dataset \
    --local-dir "/where/you/want/to/save"

# Download a specific folder
hf download OpenDriveLab-org/kai0 \
    --repo-type dataset \
    --include "meta/*" \
    --local-dir "/where/you/want/to/save"

# Download the entire dataset
hf download OpenDriveLab-org/kai0 \
    --repo-type dataset \
    --local-dir "/where/you/want/to/save"
```

## Load the Dataset

### For LeRobot version < 0.4.0

Choose the appropriate import based on your version:

| Version | Import Path |
| --- | --- |
| `<= 0.1.0` | `from lerobot.common.datasets.lerobot_dataset import LeRobotDataset` |
| `> 0.1.0` and `< 0.4.0` | `from lerobot.datasets.lerobot_dataset import LeRobotDataset` |

```python
# For version <= 0.1.0
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# For version > 0.1.0 and < 0.4.0
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load the dataset
dataset = LeRobotDataset(repo_id='where/the/dataset/you/stored')
```

### For LeRobot version >= 0.4.0

You need to migrate the dataset from v2.1 to v3.0 first. See the official documentation: Migrate the dataset from v2.1 to v3.0

```shell
python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<HF_USER/DATASET_ID>
```

⚠️ TODO: fill in once the information is available.

## License and Citation

All the data and code within this repo are under CC BY-NC-ND 4.0. Please consider citing our project if it helps your research.

```bibtex
@misc{,
  title={},
  author={},
  howpublished={\url{}},
  year={}
}
```