RoboCasa-Cosmos-Policy
Dataset Description
RoboCasa-Cosmos-Policy is a modified version of the RoboCasa simulation benchmark dataset, created as part of the Cosmos Policy project. This is the dataset used to train the Cosmos-Policy-RoboCasa-Predict2-2B checkpoint.
Key Modifications
Our modifications include the following:
- Higher-resolution images: Images are saved at 224×224 pixels (vs. 128×128 in the original).
- No-op actions filtering: Transitions with "no-op" (zero) actions that don't change the robot's state are filtered out.
- Success trimming: Episodes are terminated early when success is detected, removing unnecessary trailing actions.
- JPEG compression: Images in the full rollouts set (all_episodes/, described below) are JPEG-compressed to reduce storage requirements. The successes-only set (success_only/, described below) contains raw, uncompressed images (though they can be compressed post hoc if desired).
- Deterministic regeneration: All demonstrations are replayed in the simulation environment with deterministic seeding for reproducibility.
Dataset Structure
The dataset is organized into two main directories:
- success_only/: Contains only successful demonstration episodes (filtered version), i.e. demonstrations that succeeded when replayed in the simulation environments. This set is used to train Cosmos Policy to generate high-quality actions.
- all_episodes/: Contains all episodes, both successful and failed demonstrations. This set is used to train Cosmos Policy's world model and value function.
Each directory contains data from 24 kitchen manipulation tasks organized into 7 categories:
- kitchen_coffee/ - Coffee machine tasks
- kitchen_doors/ - Cabinet/door manipulation tasks
- kitchen_drawer/ - Drawer manipulation tasks
- kitchen_microwave/ - Microwave tasks
- kitchen_pnp/ - Pick-and-place tasks
- kitchen_sink/ - Sink-related tasks
- kitchen_stove/ - Stove manipulation tasks
Data Format
Each HDF5 file in success_only/ contains:
data/
├── demo_0/
│ ├── obs/
│ │ ├── robot0_agentview_left_rgb # Left third-person camera images
│ │ ├── robot0_agentview_right_rgb # Right third-person camera images
│ │ ├── robot0_eye_in_hand_rgb # Wrist camera images
│ │ ├── gripper_states # Gripper joint positions
│ │ ├── joint_states # Robot joint positions
│ │ ├── ee_states # End-effector states (position + orientation)
│ │ ├── ee_pos # End-effector position
│ │ └── ee_ori # End-effector orientation
│ ├── actions # Action sequence
│ ├── states # Environment states
│ ├── robot_states # Combined robot state (gripper + EEF pos + EEF quat)
│ ├── rewards # Sparse rewards (0 until final timestep)
│ ├── dones # Episode termination flags
│ └── task_description (attribute) # Natural language task description
├── demo_1/
...
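As a minimal sketch of working with this layout, the snippet below builds a tiny synthetic file mimicking the success_only/ structure above and then iterates over its demos with h5py. All shapes and values here are placeholders for illustration, not real dataset contents:

```python
import os
import tempfile

import h5py
import numpy as np

# Build a tiny synthetic file mimicking the success_only/ layout described
# above (placeholder shapes and values, not real dataset contents).
path = os.path.join(tempfile.mkdtemp(), "demo.hdf5")
T = 4  # episode length, arbitrary for this sketch
with h5py.File(path, "w") as f:
    grp = f.create_group("data/demo_0")
    obs = grp.create_group("obs")
    obs.create_dataset(
        "robot0_agentview_left_rgb",
        data=np.zeros((T, 224, 224, 3), dtype=np.uint8),
    )
    obs.create_dataset("gripper_states", data=np.zeros((T, 2), dtype=np.float32))
    grp.create_dataset("actions", data=np.zeros((T, 7), dtype=np.float32))
    grp.create_dataset("rewards", data=np.array([0, 0, 0, 1], dtype=np.float32))
    grp.attrs["task_description"] = "close the cabinet doors"

# Iterate over demos the same way one might for the real files.
with h5py.File(path, "r") as f:
    for name, demo in f["data"].items():
        print(name, demo.attrs["task_description"], demo["actions"].shape)
```

The per-demo task description is stored as an HDF5 attribute on the demo group, so it is read via `attrs` rather than as a dataset.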
The all_episodes/ directory contains rollout data in a different format. Each episode is stored as a separate HDF5 file with the naming pattern:
episode_data--task={task_name}--{timestamp}--ep={episode_num}--success={True/False}--regen_demo.hdf5
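The metadata encoded in this naming pattern can be recovered with a small regular expression. This is a sketch; the parsing helper and its field names are our own, not part of the dataset:

```python
import re

# Pattern for all_episodes/ file names, e.g.
# episode_data--task=close_the_cabinet_doors--2025-10-02_15-20-39--ep=1--success=False--regen_demo.hdf5
EPISODE_RE = re.compile(
    r"episode_data"
    r"--task=(?P<task>.+?)"
    r"--(?P<timestamp>[\d_-]+)"
    r"--ep=(?P<episode>\d+)"
    r"--success=(?P<success>True|False)"
    r"--regen_demo\.hdf5$"
)

def parse_episode_filename(name: str) -> dict:
    """Extract task name, timestamp, episode number, and success flag."""
    m = EPISODE_RE.search(name)
    if m is None:
        raise ValueError(f"Unrecognized episode filename: {name!r}")
    info = m.groupdict()
    info["episode"] = int(info["episode"])
    info["success"] = info["success"] == "True"
    return info

info = parse_episode_filename(
    "episode_data--task=close_the_cabinet_doors--2025-10-02_15-20-39"
    "--ep=1--success=False--regen_demo.hdf5"
)
```

This is handy for, e.g., selecting only successful episodes before opening any HDF5 files.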
Each of these HDF5 files contains:
# Datasets (arrays)
primary_images_jpeg # Left third-person camera images (JPEG compressed), shape: (T, H, W, 3)
secondary_images_jpeg # Right third-person camera images (JPEG compressed), shape: (T, H, W, 3)
wrist_images_jpeg # Wrist camera images (JPEG compressed), shape: (T, H, W, 3)
proprio # Proprioceptive state (gripper + EEF pos + quat), shape: (T, 9)
actions # Action sequence, shape: (T, 7)
# Attributes (scalars/metadata)
success # Boolean: True if episode succeeded, False otherwise
task_description # String: Natural language task description
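To illustrate this flat per-episode layout, the sketch below writes a synthetic stand-in for one all_episodes/ file and reads back its episode-level metadata (placeholder values throughout; in the real files the *_jpeg datasets hold JPEG-compressed image data):

```python
import os
import tempfile

import h5py
import numpy as np

# Synthetic stand-in for one all_episodes/ file (placeholder values).
path = os.path.join(tempfile.mkdtemp(), "episode.hdf5")
T = 5  # episode length, arbitrary for this sketch
with h5py.File(path, "w") as f:
    f.create_dataset("proprio", data=np.zeros((T, 9), dtype=np.float32))
    f.create_dataset("actions", data=np.zeros((T, 7), dtype=np.float32))
    f.attrs["success"] = False
    f.attrs["task_description"] = "close the cabinet doors"

# Read back the episode-level metadata, e.g. to split successes from failures.
with h5py.File(path, "r") as f:
    is_success = bool(f.attrs["success"])
    task = f.attrs["task_description"]
    n_steps = f["actions"].shape[0]
```

Unlike success_only/, there is no data/demo_N grouping here: each file is a single episode, with success and task_description stored as root-level attributes.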
Statistics
- Total tasks: 24 kitchen manipulation tasks
- Demonstrations per task: ~50 human teleoperation demonstrations (before filtering)
- Success rate: ~80-90% (varies by task)
- Image resolution: 224×224×3 (RGB)
- Action dimensions: 7 (6-DoF end-effector control + 1 gripper)
- Proprioception dimensions: 9 (2 gripper joints + 3 EEF position + 4 EEF quaternion)
Original RoboCasa Dataset
This dataset is derived from the original RoboCasa benchmark:
- Paper: RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots
- Repository: https://github.com/robocasa/robocasa
- License: CC BY 4.0
Citation
If you use this dataset, please cite both the original RoboCasa paper and the Cosmos Policy paper.
License
Creative Commons Attribution 4.0 International (CC BY 4.0)