
Dataset Card for GM-100 Xtrainer Part (HDF5 Format)

Project Page | Paper

This dataset is part of the Great March 100 (GM-100) Project. It contains raw teleoperation data collected with the Dobot Xtrainer robot and is stored in HDF5 format.

Dataset Description

This repository contains raw robotic manipulation data stored in HDF5 (.hdf5) format. It is designed for imitation learning tasks using the Dobot Xtrainer hardware stack.

  • Robot Platform: Dobot Xtrainer
  • Data Format: HDF5
  • Camera Views: Top, Left Wrist, Right Wrist

File Structure

The dataset consists of individual HDF5 files, where each file represents one trajectory (episode):

.
├── task_00001/                  # Task 1 directory
│   ├── train/                   # Training episodes for Task 1
│   │   ├── episode_init_0.hdf5
│   │   ├── episode_init_1.hdf5
│   │   └── ...
│   └── eval/                    # Evaluation/test episodes for Task 1
│       ├── episode_init_0.hdf5
│       └── ...
├── task_00002/                  # Task 2 directory
│   ├── ...
└── ...
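Given the layout above, episode files for one task/split can be enumerated with a simple glob. This is a minimal sketch; the function name `list_episodes` is an assumption, and the root path must point at your local copy of the dataset.

```python
import glob
import os

def list_episodes(root, task="task_00001", split="train"):
    """Return sorted episode file paths for one task/split.

    Follows the directory layout shown above:
    <root>/<task>/<split>/episode_init_*.hdf5
    """
    pattern = os.path.join(root, task, split, "episode_init_*.hdf5")
    return sorted(glob.glob(pattern))
```

Note that lexicographic sorting is only episode-order-correct while indices stay single-digit; sort on the parsed index if you need strict numeric order.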

HDF5 Internal Structure

Each .hdf5 file contains the following groups and datasets. The sequence length is denoted by T.

Root Group

Key       Shape     Dtype   Description
/action   (T, 14)   float   Target joint positions (the command sent to the robot)

Observations Group (/observations)

Key                  Shape     Dtype   Description
/observations/qpos   (T, 14)   float   Actual joint positions (state)

Images Group (/observations/images)

Images are stored under /observations/images/.

Note: Images are stored as compressed binary strings (e.g., JPEG buffers) to save space. They must be decoded before use.

Key           Shape (decoded)     Type                    Description
top           (T, 480, 640, 3)    uint8 (binary string)   Top static camera RGB
left_wrist    (T, 480, 640, 3)    uint8 (binary string)   Left wrist camera RGB
right_wrist   (T, 480, 640, 3)    uint8 (binary string)   Right wrist camera RGB

Hardware Specification

Joint Order (14-DOF)

The state and action vectors correspond to the following motors in order:

  1. right_waist
  2. right_shoulder
  3. right_elbow
  4. right_forearm_roll
  5. right_wrist_angle
  6. right_wrist_rotate
  7. right_gripper
  8. left_waist
  9. left_shoulder
  10. left_elbow
  11. left_forearm_roll
  12. left_wrist_angle
  13. left_wrist_rotate
  14. left_gripper
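The joint order above can be encoded as constants, which makes it easy to slice a 14-dim state or action vector into per-arm components. A minimal sketch, assuming the arm/gripper grouping implied by the list (the helper name `split_joint_vector` is hypothetical):

```python
# Joint order taken from the 14-DOF list above.
JOINT_NAMES = [
    "right_waist", "right_shoulder", "right_elbow", "right_forearm_roll",
    "right_wrist_angle", "right_wrist_rotate", "right_gripper",
    "left_waist", "left_shoulder", "left_elbow", "left_forearm_roll",
    "left_wrist_angle", "left_wrist_rotate", "left_gripper",
]

def split_joint_vector(vec):
    """Split a 14-dim qpos/action vector into arm and gripper components."""
    assert len(vec) == 14, "expected a 14-DOF vector"
    return {
        "right_arm": vec[0:6],     # right_waist .. right_wrist_rotate
        "right_gripper": vec[6],
        "left_arm": vec[7:13],     # left_waist .. left_wrist_rotate
        "left_gripper": vec[13],
    }
```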

Usage Example

You can read the data with the h5py library. Decoding images additionally requires NumPy and OpenCV (opencv-python).

import h5py
import numpy as np
import cv2
import os

# Example path (adjust as needed)
file_path = "task_00001/train/episode_init_0.hdf5"

if not os.path.exists(file_path):
    raise SystemExit(f"File not found: {file_path}")

with h5py.File(file_path, 'r') as f:
    # 1. Load low-dim data
    qpos = f['/observations/qpos'][:]
    action = f['/action'][:]
    
    print(f"Loaded Episode with {qpos.shape[0]} frames.")
    
    # 2. Load and Decode Images
    # Access the specific camera dataset
    camera_data = f['/observations/images/top']
    
    # Read the first frame's compressed data
    compressed_data = camera_data[0] 
    
    # Decode using OpenCV.
    # np.frombuffer converts the binary string to a 1D uint8 array;
    # cv2.imdecode returns the image in BGR channel order.
    image = cv2.imdecode(np.frombuffer(compressed_data, np.uint8), cv2.IMREAD_COLOR)

    print(f"Decoded Image Shape: {image.shape}")  # (480, 640, 3)
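For imitation learning, a common preprocessing step is to compute per-joint normalization statistics over the action data of many episodes. A sketch building on the reading pattern above (the function name `compute_norm_stats` is an assumption, not part of this dataset's tooling):

```python
import h5py
import numpy as np

def compute_norm_stats(episode_paths):
    """Compute per-joint mean and std of actions across episodes.

    Uses the /action key documented above; returns two (14,) arrays.
    """
    all_actions = []
    for path in episode_paths:
        with h5py.File(path, "r") as f:
            all_actions.append(f["/action"][:])
    actions = np.concatenate(all_actions, axis=0)  # (sum of T, 14)
    # Small epsilon keeps a later divide-by-std numerically safe.
    return actions.mean(axis=0), actions.std(axis=0) + 1e-8
```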

Citation

@misc{wang2026greatmarch100100,
      title={The Great March 100: 100 Detail-oriented Tasks for Evaluating Embodied AI Agents}, 
      author={Ziyu Wang and Chenyuan Liu and Yushun Xiang and Runhao Zhang and Yu Zhang and Qingbo Hao and Hongliang Lu and Houyu Chen and Zhizhong Feng and Kaiyue Zheng and Dehao Ye and Xianchao Zeng and Xinyu Zhou and Boran Wen and Jiaxin Li and Mingyu Zhang and Kecheng Zheng and Qian Zhu and Ran Cheng and Yong-Lu Li},
      year={2026},
      eprint={2601.11421},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2601.11421}, 
}