---
license: apache-2.0
task_categories:
- robotics
- video-generation
- world-models
tags:
- cosmos
- nvidia
- robot-learning
- manipulation
- multi-view
size_categories:
- n<1K
---
# Cosmos 2.5 Multi-View Robot Manipulation Dataset
This dataset contains multi-view robot manipulation demonstrations formatted for training with NVIDIA Cosmos 2.5 world models.
## Dataset Description
- **Total Episodes**: 150
- **Files per Episode**: 4 (3 videos + 1 caption file)
- **Total Files**: 600
- **Dataset Size**: ~1.1 GB
- **Video Format**: MP4
- **Caption Format**: JSONL
## Dataset Structure
```
processeddata/
└── input/
    ├── episode_000001/
    │   ├── caption.jsonl
    │   ├── pinhole_base.mp4
    │   ├── pinhole_side.mp4
    │   └── pinhole_wrist.mp4
    ├── episode_000002/
    │   ├── caption.jsonl
    │   ├── pinhole_base.mp4
    │   ├── pinhole_side.mp4
    │   └── pinhole_wrist.mp4
    ├── ...
    └── episode_000150/
        ├── caption.jsonl
        ├── pinhole_base.mp4
        ├── pinhole_side.mp4
        └── pinhole_wrist.mp4
```
## File Descriptions
### Video Files
Each episode contains three synchronized video views:
- **pinhole_base.mp4**: Base/overhead camera view
- **pinhole_side.mp4**: Side camera view
- **pinhole_wrist.mp4**: Wrist-mounted camera view
### Caption Files
Each `caption.jsonl` file contains three JSON lines (one per camera view) with the following fields:
- `caption`: Natural language description of the manipulation task
- `view`: Camera view identifier (`pinhole_base`, `pinhole_side`, or `pinhole_wrist`)
- `tag`: Additional metadata (nullable)

Example `caption.jsonl`:
```json
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_base", "tag": null}
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_wrist", "tag": null}
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_side", "tag": null}
```
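Because each line is an independent JSON object, the file can be parsed with the standard library alone. The sketch below inlines the example records above as a string so it is self-contained; in practice you would read `episode_XXXXXX/caption.jsonl` from disk instead:

```python
import json

# The three caption records of one episode, inlined for illustration.
# In practice: sample_jsonl = Path("episode_000001/caption.jsonl").read_text()
sample_jsonl = """\
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_base", "tag": null}
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_wrist", "tag": null}
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_side", "tag": null}
"""

# One JSON object per line: parse each line, then index captions by view.
records = [json.loads(line) for line in sample_jsonl.splitlines()]
captions_by_view = {r["view"]: r["caption"] for r in records}

print(sorted(captions_by_view))          # the three view identifiers
print(captions_by_view["pinhole_base"])  # the task description for that view
```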
## Usage
### Loading the Dataset
The episodes are stored as raw MP4 and JSONL files rather than in a `datasets`-native layout, so the most reliable way to fetch them is `huggingface_hub.snapshot_download` (a plain `datasets.load_dataset` call may not resolve this directory structure without a custom loading script):

```python
from huggingface_hub import snapshot_download

# Download the full dataset (~1.1 GB) to the local Hugging Face cache
local_dir = snapshot_download(
    repo_id="JeffrinSam/cosmos2.5multip",
    repo_type="dataset",
)

# Episode data lives under processeddata/input/
episode_path = f"{local_dir}/processeddata/input/episode_000001"
```
### Using with Cosmos 2.5
This dataset is formatted for training world models with NVIDIA Cosmos 2.5. Each episode provides:
- Multi-view synchronized videos for spatial understanding
- Natural language task descriptions
- Structured format compatible with Cosmos data loaders
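Since the layout is uniform across episodes, pairing the three videos with the caption file reduces to path construction. A minimal sketch, assuming the `episode_files` helper name (not part of the dataset) and the six-digit zero-padded episode directories shown above:

```python
from pathlib import Path

VIEWS = ("pinhole_base", "pinhole_side", "pinhole_wrist")

def episode_files(root, index):
    """Return the caption file and the three view videos for one episode.

    `root` is the directory containing processeddata/ after download;
    episode directories are zero-padded to six digits (episode_000001).
    """
    ep = Path(root) / "processeddata" / "input" / f"episode_{index:06d}"
    return {
        "caption": ep / "caption.jsonl",
        **{view: ep / f"{view}.mp4" for view in VIEWS},
    }

files = episode_files(".", 1)
print(files["caption"])  # ./processeddata/input/episode_000001/caption.jsonl
```

The returned dictionary can be fed directly to whichever video/caption loader your Cosmos training pipeline expects.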
## Applications
- Robot manipulation learning
- Multi-view world model training
- Vision-language grounding for robotics
- Physical AI simulation
- Video prediction models
## Citation
If you use this dataset, please cite:
```bibtex
@misc{cosmos2.5multip,
  title        = {Cosmos 2.5 Multi-View Robot Manipulation Dataset},
  author       = {JeffrinSam},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/JeffrinSam/cosmos2.5multip}}
}
```
## License
This dataset is released under the Apache 2.0 License.
## Related Resources
- [NVIDIA Cosmos](https://www.nvidia.com/en-us/ai/cosmos/)
- [Physical AI Documentation](https://docs.nvidia.com/cosmos/)
- [LeRobot Framework](https://github.com/huggingface/lerobot)