# MiM Flexiv PushT Dataset
A real-robot dataset collected on a Flexiv Rizon 10S consisting of ~4 hours of (1) random play interactions and (2) push-to-goal episodes, with RGB observations at 5 Hz and 2D delta-position actions.
## What’s inside

### Tasks / subsets

This repository contains two related subsets:

**Play (random interaction):**
- Unstructured episodes with no explicit start/end goals.
- Intended for representation learning, world modeling, and offline RL from diverse behavior.

**Push-to-goal (task):**
- Each episode pushes a T-shaped object toward the center of the board.
- Intended for imitation learning, offline RL, and goal-conditioned learning (if you add goals).
### Observation and action spaces

**Observation (`images`):**
- RGB image at 5 Hz
- Stored as `uint8` in the range `[0, 255]`
- Resolution: `256 x 256 x 3`
- Tensor shape per episode in HDF5: `1 x seq_len x 256 x 256 x 3`

**Action (`actions`):**
- 2D delta position command in the board/camera plane: `(delta_x, delta_y)`
- Stored as `float32`
- Tensor shape per episode in HDF5: `1 x seq_len x 2`
### Dataset scale (approx.)
- Total recording time: ~4 hours
- Frame rate: 5 Hz
- Approx. total frames: ~72,000 (4 * 3600 * 5)
Note: Exact counts per split are provided in the metadata JSON shipped with each subset.
## Data format and files

### File layout

```text
.
├── README.md
├── assets/
│   └── preview.gif
├── play/
│   ├── metadata.json
│   ├── shard_000.hdf5
│   ├── shard_001.hdf5
│   └── ...
└── task/
    ├── metadata.json
    ├── shard_000.hdf5
    ├── shard_001.hdf5
    └── ...
```
### HDF5 schema

Each HDF5 file contains two main datasets:

- `images`: shape `1 x seq_len x 256 x 256 x 3`, dtype `uint8`
- `actions`: shape `1 x seq_len x 2`, dtype `float32`
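As a sanity check, this schema can be reproduced by writing and then reading back a small synthetic shard with `h5py` (a sketch; the episode length and temp-file path here are made up, not taken from the dataset):

```python
import os
import tempfile

import h5py
import numpy as np

# Hypothetical episode length; real shards vary.
seq_len = 10

# Synthetic episode matching the documented schema.
images = np.zeros((1, seq_len, 256, 256, 3), dtype=np.uint8)
actions = np.zeros((1, seq_len, 2), dtype=np.float32)

path = os.path.join(tempfile.mkdtemp(), "shard_000.hdf5")
with h5py.File(path, "w") as f:
    f.create_dataset("images", data=images)
    f.create_dataset("actions", data=actions)

# Read back and confirm shapes/dtypes match the schema above.
with h5py.File(path, "r") as f:
    assert f["images"].shape == (1, seq_len, 256, 256, 3)
    assert f["images"].dtype == np.uint8
    assert f["actions"].shape == (1, seq_len, 2)
    assert f["actions"].dtype == np.float32
```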
### Metadata JSON schema

Each subset includes a `metadata.json` describing global stats and shapes. Example:

```json
{
  "num_shards": 13,
  "episodes_per_shard": 1,
  "total_episodes": 13,
  "image_shape": [256, 256, 3],
  "action_shape": [2],
  "min_action": [min_1, min_2],
  "max_action": [max_1, max_2]
}
```
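The `min_action`/`max_action` statistics can be used to rescale raw delta actions into `[-1, 1]` before training. A minimal sketch (the metadata values below are illustrative, not the dataset's real statistics):

```python
import numpy as np

# Illustrative metadata; real values come from the subset's metadata.json.
metadata = {
    "min_action": [-0.05, -0.05],
    "max_action": [0.05, 0.05],
}

def normalize_actions(actions, metadata):
    """Map raw delta actions into [-1, 1] using per-dimension min/max."""
    lo = np.asarray(metadata["min_action"], dtype=np.float32)
    hi = np.asarray(metadata["max_action"], dtype=np.float32)
    return 2.0 * (actions - lo) / (hi - lo) - 1.0

raw = np.array([[0.05, -0.05], [0.0, 0.0]], dtype=np.float32)
norm = normalize_actions(raw, metadata)
# norm[0] == [1, -1]; norm[1] == [0, 0]
```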
## How to use

### Quickstart (Python)

Below is a minimal example for loading one episode with `h5py`:

```python
import h5py
import numpy as np

path = "play/shard_000.hdf5"  # or "task/shard_000.hdf5"

with h5py.File(path, "r") as f:
    # Indexing [0] drops the leading episode axis and loads into memory.
    images = f["images"][0]    # (seq_len, 256, 256, 3), uint8
    actions = f["actions"][0]  # (seq_len, 2), float32

# Example: normalize images to float in [0, 1]
images_f = images.astype(np.float32) / 255.0
```
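To loop over every episode in a subset, one can glob over the shards. A sketch, assuming one episode per shard as in the metadata above; the demo writes synthetic shards into a temp directory so it is self-contained:

```python
import glob
import os
import tempfile

import h5py
import numpy as np

def iter_episodes(subset_dir):
    """Yield (images, actions) numpy arrays, one episode per shard."""
    for path in sorted(glob.glob(os.path.join(subset_dir, "shard_*.hdf5"))):
        with h5py.File(path, "r") as f:
            yield f["images"][0], f["actions"][0]

# Demo on synthetic shards (stand-in for the real play/ or task/ directory).
subset_dir = tempfile.mkdtemp()
for i in range(2):
    with h5py.File(os.path.join(subset_dir, f"shard_{i:03d}.hdf5"), "w") as f:
        f.create_dataset("images", data=np.zeros((1, 4, 256, 256, 3), np.uint8))
        f.create_dataset("actions", data=np.zeros((1, 4, 2), np.float32))

episodes = list(iter_episodes(subset_dir))
```

In practice you would pass `"play"` or `"task"` as `subset_dir` after downloading the repository.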
## DreamerV4 world model (simulator)
A DreamerV4 world model has been trained on this dataset to enable “simulated” policy training/evaluation inside a learned dynamics model.
World model link: Here
Notes:

- The world model is provided for research convenience and may not perfectly capture real-world contacts/friction.
- Please report issues / inconsistencies via GitHub issues or the Hugging Face discussion tab.
## Data collection details

- Platform: Flexiv Rizon 10S (real robot) in an in-lab setup
- Camera: Orbbec Femto Bolt, fixed view looking at the board with the T object
- Control/action logged: 2D delta position `(x, y)` per timestep
- Observation logged: RGB image at 5 Hz
- Play subset: random exploration / interaction
- Task subset: pushing the T to the board center
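Because the logged actions are per-timestep planar deltas, a rough end-effector path can be recovered by cumulatively summing them. A minimal numpy sketch (the delta values and the zero starting position are made up for illustration):

```python
import numpy as np

# Illustrative delta actions (delta_x, delta_y), one row per 5 Hz timestep.
deltas = np.array([[0.01, 0.0], [0.01, 0.0], [0.0, 0.02]], dtype=np.float32)

start = np.zeros(2, dtype=np.float32)  # assumed initial planar position

# Position after each step: p_t = p_0 + sum of deltas up to t.
trajectory = start + np.cumsum(deltas, axis=0)
# trajectory[-1] == [0.02, 0.02]
```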
## Limitations

- Actions are 2D deltas; they do not include full robot state, forces, or contact measurements.
- Camera viewpoint and lab conditions may limit generalization.
## License
This dataset is released under AGPL-3.0.
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{mim_pusht_2025,
  title  = {MiM Flexiv PushT Dataset},
  author = {Rooholla Khorrambakht},
  year   = {2025},
  url    = {https://huggingface.co/datasets/Rooholla/MiM-PushT}
}
```