
# RoboChallenge Dataset

## Dataset Structure

### Available Tasks

The dataset includes 30 diverse manipulation tasks (Table30):

- arrange_flowers
- arrange_fruits_in_basket
- arrange_paper_cups
- clean_dining_table
- fold_dishcloth
- hang_toothbrush_cup
- make_vegetarian_sandwich
- move_objects_into_box
- open_the_drawer
- place_shoes_on_rack
- plug_in_network_cable
- pour_fries_into_plate
- press_three_buttons
- put_cup_on_coaster
- put_opener_in_drawer
- put_pen_into_pencil_case
- scan_QR_code
- search_green_boxes
- set_the_plates
- shred_scrap_paper
- sort_books
- sort_electronic_products
- stack_bowls
- stack_color_blocks
- stick_tape_to_box
- sweep_the_rubbish
- turn_on_faucet
- turn_on_light_switch
- water_potted_plant
- wipe_the_table

### Hierarchy

The dataset is organized by task, with each task containing multiple demonstration episodes:

```
.
├── <task_name>/                    # e.g., arrange_flowers, fold_dishcloth
│   ├── task_desc.json              # Task description
│   ├── meta/                       # Task-level metadata
│   │   └── task_info.json
│   └── data/                       # Episode data
│       ├── episode_000000/         # Individual episode
│       │   ├── meta/
│       │   │   └── episode_meta.json    # Episode metadata
│       │   ├── states/
│       │   │   └── states.jsonl         # Robot states
│       │   └── videos/
│       │       ├── arm_realsense_rgb.mp4      # Arm-mounted camera
│       │       ├── global_realsense_rgb.mp4   # Global view camera
│       │       └── right_realsense_rgb.mp4    # Right-side camera (BEV)
│       ├── episode_000001/
│       └── ...
├── convert_to_lerobot.py           # Conversion script
└── README.md
```
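This layout is straightforward to traverse with standard Python. A minimal sketch — the `list_episodes` helper is illustrative, not part of the dataset tooling, and the dummy tree below stands in for a real task directory:

```python
import json
import tempfile
from pathlib import Path

def list_episodes(task_dir):
    """Return a task's episode directories, sorted by episode index."""
    return sorted((Path(task_dir) / "data").glob("episode_*"))

# Build a tiny dummy task tree for illustration (names follow the hierarchy above).
root = Path(tempfile.mkdtemp())
task = root / "arrange_flowers"
for i in range(3):
    meta_dir = task / "data" / f"episode_{i:06d}" / "meta"
    meta_dir.mkdir(parents=True)
    (meta_dir / "episode_meta.json").write_text(json.dumps({"episode_index": i}))

eps = list_episodes(task)
print([p.name for p in eps])
# -> ['episode_000000', 'episode_000001', 'episode_000002']
```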

## JSON File Format

### task_info.json

```jsonc
{
    "robot_id": "arx5_1",                    // Robot model identifier
    "task_desc": {
        "task_name": "arrange_flowers",      // Task identifier
        "prompt": "insert the three flowers on the table into the vase one by one",
        "scoring": "...",                    // Scoring criteria
        "task_tag": [                        // Task characteristics
            "repeated",
            "single-arm",
            "ARX5",
            "precise3d"
        ]
    },
    "video_info": {
        "fps": 30,                           // Video frame rate
        "ext": "mp4",                        // Video format
        "encoding": {
            "vcodec": "libx264",             // Video codec
            "pix_fmt": "yuv420p"             // Pixel format
        }
    }
}
```
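The `//` comments above are annotations; the file on disk is plain JSON and can be loaded directly with the standard library. A small sketch, where a temporary file stands in for a real `task_info.json`:

```python
import json
import os
import tempfile

# Mirror of the annotated example above (the "scoring" field is omitted here).
task_info = {
    "robot_id": "arx5_1",
    "task_desc": {
        "task_name": "arrange_flowers",
        "prompt": "insert the three flowers on the table into the vase one by one",
        "task_tag": ["repeated", "single-arm", "ARX5", "precise3d"],
    },
    "video_info": {
        "fps": 30,
        "ext": "mp4",
        "encoding": {"vcodec": "libx264", "pix_fmt": "yuv420p"},
    },
}

path = os.path.join(tempfile.mkdtemp(), "task_info.json")
with open(path, "w") as f:
    json.dump(task_info, f)

with open(path) as f:
    info = json.load(f)
print(info["task_desc"]["task_name"], info["video_info"]["fps"])
# -> arrange_flowers 30
```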

### episode_meta.json

```jsonc
{
    "episode_index": 0,                      // Episode number
    "start_time": 1750405586.3430033,        // Unix timestamp (start)
    "end_time": 1750405642.5247612,          // Unix timestamp (end)
    "frames": 1672                           // Total video frames
}
```
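The timestamps and frame count let you sanity-check an episode against the declared video frame rate. For the example above:

```python
episode_meta = {
    "episode_index": 0,
    "start_time": 1750405586.3430033,
    "end_time": 1750405642.5247612,
    "frames": 1672,
}

# Wall-clock duration of the episode, in seconds.
duration = episode_meta["end_time"] - episode_meta["start_time"]
# Frames divided by duration should be close to the declared 30 fps.
effective_fps = episode_meta["frames"] / duration

print(f"{duration:.1f} s, {effective_fps:.1f} fps")
# -> 56.2 s, 29.8 fps
```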

## Convert to LeRobot

While you can implement a custom Dataset class to read RoboChallenge data directly, we strongly recommend converting to LeRobot format to take advantage of LeRobot's comprehensive data processing and loading utilities. The provided `convert_to_lerobot.py` script generates a ready-to-use LeRobot dataset repository from the RoboChallenge dataset.

### Prerequisites

- Python 3.9+ with the following packages:
  - lerobot
  - opencv-python
  - numpy
- Configure `$LEROBOT_HOME` (defaults to `~/.lerobot` if unset).

```bash
pip install lerobot opencv-python numpy
export LEROBOT_HOME="/path/to/lerobot_home"
```

### Usage

Run the converter from the repository root (or provide an absolute path):

```bash
python convert_to_lerobot.py \
  --repo-name example_repo \
  --raw-dataset /path/to/example_dataset \
  --frame-interval 1
```
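The `--frame-interval` flag controls temporal subsampling: an interval of 1 keeps every frame, while larger values keep every N-th frame. We have not traced the script's internals, so the sketch below only illustrates the usual semantics; `subsample_indices` is a hypothetical helper, not part of the converter:

```python
def subsample_indices(num_frames, frame_interval):
    """Indices of frames kept when sampling every `frame_interval`-th frame.

    An interval of 1 keeps all frames (assumed semantics, for illustration).
    """
    return list(range(0, num_frames, frame_interval))

# For the 1672-frame example episode:
print(len(subsample_indices(1672, 1)))  # -> 1672 (every frame kept)
print(len(subsample_indices(1672, 3)))  # -> 558  (every third frame)
```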

### Output

- Frames and metadata are saved under `$LEROBOT_HOME/`.
- At the end, the script calls `dataset.consolidate(run_compute_stats=False)`. If you require aggregated statistics, run it with `run_compute_stats=True` or execute a separate stats job.