---
license: apache-2.0
task_categories:
- robotics
- video-generation
- world-models
tags:
- cosmos
- nvidia
- robot-learning
- manipulation
- multi-view
size_categories:
- n<1K
---
# Cosmos 2.5 Multi-View Robot Manipulation Dataset
This dataset contains multi-view robot manipulation demonstrations formatted for training with NVIDIA Cosmos 2.5 world models.
## Dataset Description
- **Total Episodes**: 150
- **Files per Episode**: 4 (3 videos + 1 caption file)
- **Total Files**: 600
- **Dataset Size**: ~1.1 GB
- **Video Format**: MP4
- **Caption Format**: JSONL
## Dataset Structure
```
processeddata/
└── input/
    ├── episode_000001/
    │   ├── caption.jsonl
    │   ├── pinhole_base.mp4
    │   ├── pinhole_side.mp4
    │   └── pinhole_wrist.mp4
    ├── episode_000002/
    │   ├── caption.jsonl
    │   ├── pinhole_base.mp4
    │   ├── pinhole_side.mp4
    │   └── pinhole_wrist.mp4
    ...
    └── episode_000150/
        ├── caption.jsonl
        ├── pinhole_base.mp4
        ├── pinhole_side.mp4
        └── pinhole_wrist.mp4
```
## File Descriptions
### Video Files
Each episode contains three synchronized video views:
- **pinhole_base.mp4**: Base/overhead camera view
- **pinhole_side.mp4**: Side camera view
- **pinhole_wrist.mp4**: Wrist-mounted camera view
### Caption Files
Each `caption.jsonl` file contains three JSON objects, one per line (one for each camera view), with the following fields:
- `caption`: Natural language description of the task
- `view`: Camera view identifier (`pinhole_base`, `pinhole_side`, or `pinhole_wrist`)
- `tag`: Additional metadata (nullable)
Example `caption.jsonl`:
```json
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_base", "tag": null}
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_wrist", "tag": null}
{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_side", "tag": null}
```
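A caption file in this format can be parsed with the standard-library `json` module. A minimal sketch, assuming the three-line layout shown above (`load_captions` is an illustrative helper, not part of any library):

```python
import json

def load_captions(text):
    """Parse caption.jsonl content into a {view: caption} mapping."""
    entries = [json.loads(line) for line in text.splitlines() if line.strip()]
    return {entry["view"]: entry["caption"] for entry in entries}

# Example payload matching the format above.
sample = (
    '{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_base", "tag": null}\n'
    '{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_wrist", "tag": null}\n'
    '{"caption": "Pick up the bottle and place it into the blue box", "view": "pinhole_side", "tag": null}\n'
)
captions = load_captions(sample)
print(sorted(captions))  # ['pinhole_base', 'pinhole_side', 'pinhole_wrist']
```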
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("JeffrinSam/cosmos2.5multip")
# Access episode data
episode_path = "processeddata/input/episode_000001"
```
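Because the episodes are stored as raw MP4 and JSONL files rather than as preprocessed shards, it may be more convenient to download a local snapshot of the repository (e.g. with `huggingface_hub.snapshot_download`) and walk the directory tree yourself. A sketch of such a walker, assuming the layout shown above (`list_episodes` is an illustrative helper, not part of any Cosmos or Hugging Face API):

```python
from pathlib import Path

def list_episodes(root):
    """Yield (episode_name, caption_path, video_paths) per episode directory."""
    input_dir = Path(root) / "processeddata" / "input"
    for ep in sorted(input_dir.glob("episode_*")):
        videos = sorted(ep.glob("pinhole_*.mp4"))
        yield ep.name, ep / "caption.jsonl", videos

# Usage, after downloading the dataset to a local directory:
# for name, caption_path, videos in list_episodes("/path/to/local/snapshot"):
#     print(name, caption_path.name, [v.name for v in videos])
```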
### Using with Cosmos 2.5
This dataset is formatted for training world models with NVIDIA Cosmos 2.5. Each episode provides:
- Multi-view synchronized videos for spatial understanding
- Natural language task descriptions
- Structured format compatible with Cosmos data loaders
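Before training, it can be worth verifying that every episode directory is complete, i.e. that the caption file and all three expected views are present. A minimal integrity check, assuming the per-episode layout described above (`check_episode` is an illustrative helper, not part of any Cosmos data loader):

```python
from pathlib import Path

EXPECTED_VIEWS = {"pinhole_base", "pinhole_side", "pinhole_wrist"}

def check_episode(episode_dir):
    """Return a list of problems found in one episode directory (empty = OK)."""
    episode_dir = Path(episode_dir)
    problems = []
    if not (episode_dir / "caption.jsonl").is_file():
        problems.append("missing caption.jsonl")
    found_views = {p.stem for p in episode_dir.glob("*.mp4")}
    for view in sorted(EXPECTED_VIEWS - found_views):
        problems.append(f"missing {view}.mp4")
    return problems
```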
## Applications
- Robot manipulation learning
- Multi-view world model training
- Vision-language grounding for robotics
- Physical AI simulation
- Video prediction models
## Citation
If you use this dataset, please cite:
```bibtex
@misc{cosmos2.5multip,
  title        = {Cosmos 2.5 Multi-View Robot Manipulation Dataset},
  author       = {JeffrinSam},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/JeffrinSam/cosmos2.5multip}}
}
```
## License
This dataset is released under the Apache 2.0 License.
## Related Resources
- [NVIDIA Cosmos](https://www.nvidia.com/en-us/ai/cosmos/)
- [Physical AI Documentation](https://docs.nvidia.com/cosmos/)
- [LeRobot Framework](https://github.com/huggingface/lerobot)