---
license: cc-by-4.0
---
# RoboCasa-Cosmos-Policy
## Dataset Description
RoboCasa-Cosmos-Policy is a modified version of the [RoboCasa simulation benchmark dataset](https://github.com/robocasa/robocasa), created as part of the Cosmos Policy project. This is the dataset used to train the [Cosmos-Policy-RoboCasa-Predict2-2B](https://huggingface.co/nvidia/Cosmos-Policy-RoboCasa-Predict2-2B) checkpoint.
### Key Modifications
Our modifications include the following:
1. **Higher-resolution images**: Images are saved at 224×224 pixels (vs. 128×128 in the original).
2. **No-op action filtering**: Transitions with "no-op" (zero) actions that leave the robot's state unchanged are filtered out.
3. **Success trimming**: Episodes are terminated early when success is detected, removing unnecessary trailing actions.
4. **JPEG compression**: Images in the full rollouts set (`all_episodes/` described below) are JPEG-compressed to reduce storage requirements. However, the successes-only set (`success_only/` described below) contains raw images that are not compressed (though they can be compressed post-hoc if desired).
5. **Deterministic regeneration**: All demonstrations are replayed in the simulation environment with deterministic seeding for reproducibility.
### Dataset Structure
The dataset is organized into two main directories:
- **`success_only/`**: Contains only successful demonstration episodes (filtered version). These are demonstrations that succeeded when replayed in the simulation environments. This set is used to train Cosmos Policy to generate high-quality actions.
- **`all_episodes/`**: Contains all episodes, including both successful and failed demonstrations. This set is used to train Cosmos Policy's world model and value function.
Each directory contains data from 24 kitchen manipulation tasks organized into 7 categories:
- `kitchen_coffee/` - Coffee machine tasks
- `kitchen_doors/` - Cabinet/door manipulation tasks
- `kitchen_drawer/` - Drawer manipulation tasks
- `kitchen_microwave/` - Microwave tasks
- `kitchen_pnp/` - Pick-and-place tasks
- `kitchen_sink/` - Sink-related tasks
- `kitchen_stove/` - Stove manipulation tasks
### Data Format
Each HDF5 file in `success_only/` contains:
```
data/
├── demo_0/
│   ├── obs/
│   │   ├── robot0_agentview_left_rgb    # Left third-person camera images
│   │   ├── robot0_agentview_right_rgb   # Right third-person camera images
│   │   ├── robot0_eye_in_hand_rgb       # Wrist camera images
│   │   ├── gripper_states               # Gripper joint positions
│   │   ├── joint_states                 # Robot joint positions
│   │   ├── ee_states                    # End-effector states (position + orientation)
│   │   ├── ee_pos                       # End-effector position
│   │   └── ee_ori                       # End-effector orientation
│   ├── actions                          # Action sequence
│   ├── states                           # Environment states
│   ├── robot_states                     # Combined robot state (gripper + EEF pos + EEF quat)
│   ├── rewards                          # Sparse rewards (0 until final timestep)
│   ├── dones                            # Episode termination flags
│   └── task_description (attribute)     # Natural language task description
├── demo_1/
...
```
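Files with this layout can be read with `h5py`. The sketch below is illustrative, not part of the dataset: it builds a tiny synthetic file that mimics the schema (2 timesteps, 8×8 images instead of 224×224, a single abbreviated obs key) and then loads it back with a small helper.

```python
import os
import tempfile

import h5py
import numpy as np

def load_demo(path, demo="demo_0"):
    """Load one demonstration from a success_only-style HDF5 file."""
    with h5py.File(path, "r") as f:
        grp = f["data"][demo]
        obs = {key: grp["obs"][key][:] for key in grp["obs"]}
        actions = grp["actions"][:]
        task = grp.attrs["task_description"]
    return obs, actions, task

# Build a tiny synthetic file mimicking the schema above (illustrative only).
path = os.path.join(tempfile.mkdtemp(), "demo.hdf5")
with h5py.File(path, "w") as f:
    grp = f.create_group("data/demo_0")
    grp.create_group("obs").create_dataset(
        "robot0_agentview_left_rgb",
        data=np.zeros((2, 8, 8, 3), dtype=np.uint8),
    )
    grp.create_dataset("actions", data=np.zeros((2, 7), dtype=np.float32))
    grp.attrs["task_description"] = "example task"

obs, actions, task = load_demo(path)
print(actions.shape)  # (2, 7)
```

The same pattern extends to the remaining keys (`states`, `rewards`, `dones`, etc.); iterating over `f["data"]` visits every `demo_*` group in a file.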
The `all_episodes/` directory contains rollout data in a different format. Each episode is stored as a separate HDF5 file with the naming pattern:
```
episode_data--task={task_name}--{timestamp}--ep={episode_num}--success={True/False}--regen_demo.hdf5
```
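Because the success flag is embedded in the filename, episodes can be filtered by outcome without opening each file. A minimal parsing sketch for the naming pattern above (the task name, timestamp, and episode number in the example string are made up for illustration):

```python
def parse_episode_filename(name):
    """Extract task, timestamp, episode number, and success flag from an
    all_episodes filename following the documented naming pattern."""
    core = name[len("episode_data--"):-len("--regen_demo.hdf5")]
    fields = {}
    for part in core.split("--"):
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key] = value
        else:
            fields["timestamp"] = part  # the only field without "key="
    return {
        "task": fields["task"],
        "timestamp": fields.get("timestamp"),
        "episode": int(fields["ep"]),
        "success": fields["success"] == "True",
    }

# Hypothetical filename for illustration:
info = parse_episode_filename(
    "episode_data--task=PnPCounterToCab--2024-01-01--ep=3"
    "--success=True--regen_demo.hdf5"
)
print(info["success"])  # True
```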
Each of these HDF5 files contains:
```
# Datasets (arrays)
primary_images_jpeg # Left third-person camera images (JPEG compressed), shape: (T, H, W, 3)
secondary_images_jpeg # Right third-person camera images (JPEG compressed), shape: (T, H, W, 3)
wrist_images_jpeg # Wrist camera images (JPEG compressed), shape: (T, H, W, 3)
proprio # Proprioceptive state (gripper + EEF pos + quat), shape: (T, 9)
actions # Action sequence, shape: (T, 7)
# Attributes (scalars/metadata)
success # Boolean: True if episode succeeded, False otherwise
task_description # String: Natural language task description
```
### Statistics
- **Total tasks**: 24 kitchen manipulation tasks
- **Demonstrations per task**: ~50 human teleoperation demonstrations (before filtering)
- **Success rate**: ~80-90% (varies by task)
- **Image resolution**: 224×224×3 (RGB)
- **Action dimensions**: 7 (6-DoF end-effector control + 1 gripper)
- **Proprioception dimensions**: 9 (2 gripper joints + 3 EEF position + 4 EEF quaternion)
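Given the component ordering stated above (gripper joints, then EEF position, then EEF quaternion), the 9-D `proprio` vector can be split as follows. This is a sketch; verify the ordering against your copy of the data before relying on it:

```python
import numpy as np

def split_proprio(proprio):
    """Split a (..., 9) proprioceptive array into its documented parts:
    2 gripper joints, 3-D EEF position, 4-D EEF quaternion."""
    proprio = np.asarray(proprio)
    gripper = proprio[..., 0:2]
    ee_pos = proprio[..., 2:5]
    ee_quat = proprio[..., 5:9]
    return gripper, ee_pos, ee_quat

# Works per-timestep or on a whole (T, 9) trajectory at once.
gripper, ee_pos, ee_quat = split_proprio(np.arange(9.0))
print(ee_pos)  # [2. 3. 4.]
```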
### Original RoboCasa Dataset
This dataset is derived from the original RoboCasa benchmark:
- **Paper**: [RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots](https://arxiv.org/abs/2406.02523)
- **Repository**: https://github.com/robocasa/robocasa
- **License**: CC BY 4.0
### Citation
If you use this dataset, please cite both the original RoboCasa paper and the Cosmos Policy paper.
<!-- ```bibtex
@article{nasiriany2024robocasa,
  title={RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots},
  author={Nasiriany, Soroush and Maddukuri, Abhiram and Zhang, Lance and Parikh, Adeet and Lo, Aaron and Joshi, Abhishek and Mandlekar, Ajay and Zhu, Yuke},
  journal={arXiv preprint arXiv:2406.02523},
  year={2024}
}
# TODO: Add Cosmos Policy BibTeX
``` -->
### License
Creative Commons Attribution 4.0 International (CC BY 4.0)