Enhance ViPRA dataset card with links, abstract, usage, and task categories
This PR significantly improves the dataset card for ViPRA by:
- Adding `task_categories: ['robotics', 'video-text-to-text']` to the metadata for better discoverability.
- Including a visual teaser image.
- Providing direct links to the Hugging Face paper, the project page, and the GitHub repository.
- Incorporating the paper's full abstract and an overview of the ViPRA project.
- Detailing the contents and purpose of the `cotrain-dynamics14` and `cotrain-vqgan-vision-cache` datasets.
- Describing the required dataset structures for various data sources (SSv2, OpenX, LIBERO) with example configurations.
- Adding a "Sample JSONL Entry" to illustrate the dataset's data format.
- Including a "Sample Usage" section with `huggingface-cli` commands for downloading the datasets and a Python code snippet demonstrating the `ViPRAClient` for robot action inference, all sourced directly from the GitHub README.
- Adding the BibTeX citation for proper attribution.
These additions make the dataset card much more comprehensive, informative, and useful for researchers looking to understand and utilize the ViPRA datasets.
---
license: apache-2.0
size_categories:
- 10M<n<100M
tags:
- robotics
- latent-actions
- vipra
task_categories:
- robotics
- video-text-to-text
---

# ViPRA: Video Prediction for Robot Actions Datasets

<div align="center">
<picture>
  <!-- Optional: light/dark variants -->
  <img src="https://github.com/sroutray/vipra/assets/teaser_vipra.png" alt="ViPRA teaser" style="max-width: 100%; height: auto;">
</picture>

<p>
  <a href="https://huggingface.co/papers/2511.07732">
    <img src="https://img.shields.io/badge/Paper-2511.07732-b31b1b.svg" alt="Paper">
  </a>
  <a href="https://vipra-project.github.io">
    <img src="https://img.shields.io/badge/Project-Page-green.svg" alt="Project Page">
  </a>
  <a href="https://github.com/sroutray/vipra">
    <img src="https://img.shields.io/badge/Code-GitHub-blue.svg" alt="Code">
  </a>
</p>
</div>

ViPRA (Video Prediction for Robot Actions) is a framework that learns continuous robot control from actionless videos. It employs a video-language model to predict future visual observations and motion-centric latent actions, which are intermediate representations of scene dynamics. These latent actions are trained to reflect physically grounded behavior, and for downstream control, a chunked flow matching decoder maps them to robot-specific continuous action sequences.

## Abstract

Can we turn a video prediction model into a robot policy? Videos, including those of humans or teleoperated robots, capture rich physical interactions. However, most of them lack labeled actions, which limits their use in robot learning. We present Video Prediction for Robot Actions (ViPRA), a simple pretraining-finetuning framework that learns continuous robot control from these actionless videos. Instead of directly predicting actions, we train a video-language model to predict both future visual observations and motion-centric latent actions, which serve as intermediate representations of scene dynamics. We train these latent actions using perceptual losses and optical flow consistency to ensure they reflect physically grounded behavior. For downstream control, we introduce a chunked flow matching decoder that maps latent actions to robot-specific continuous action sequences, using only 100 to 200 teleoperated demonstrations. This approach avoids expensive action annotation, supports generalization across embodiments, and enables smooth, high-frequency continuous control up to 22 Hz via chunked action decoding. Unlike prior latent action works that treat pretraining as autoregressive policy learning, ViPRA explicitly models both what changes and how. Our method outperforms strong baselines, with a 16% gain on the SIMPLER benchmark and a 13% improvement across real-world manipulation tasks. We will release models and code at https://vipra-project.github.io.

## Overview

- A recipe to learn generalist robot policies from large-scale human and robot videos without action labels.
- A novel approach to extract motion-centric latent actions that capture fine-grained physical dynamics.
- A flow matching action decoder with action chunking for high-frequency continuous control.
- Outperforms prior latent action methods and VLA baselines trained on ground-truth actions.

---

## Datasets

This repository hosts two primary datasets for ViPRA: `cotrain-dynamics14` and `cotrain-vqgan-vision-cache`.

### `cotrain-dynamics14` (ViPRA Policy Pretraining Data)

This is a pre-tokenized, horizon-14 dynamics dataset used for ViPRA policy pretraining. It merges multiple robot datasets (LIBERO, BridgeData V2, Fractal, Kuka) with human video data from SSv2. Each training sample includes:

* history frames
* latent state target
* latent action tokens from LAQ
* natural language task text

This dataset is already chunked into 14-step latent action sequences.

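The horizon-14 chunking above can be sketched in a few lines. This is a hypothetical illustration only: the helper name and the choice to drop a trailing remainder shorter than the horizon are assumptions, not part of the ViPRA codebase.

```python
# Hypothetical sketch of horizon-14 chunking: a per-step latent action
# stream is split into fixed-length chunks like those shipped
# pre-computed in cotrain-dynamics14. Names here are illustrative.
HORIZON = 14

def chunk_latent_actions(latent_actions, horizon=HORIZON):
    """Split a latent action sequence into fixed-length chunks,
    dropping any trailing remainder shorter than the horizon."""
    return [
        latent_actions[i:i + horizon]
        for i in range(0, len(latent_actions) - horizon + 1, horizon)
    ]

# A 30-step trajectory yields two complete 14-step chunks.
chunks = chunk_latent_actions(list(range(30)))
print(len(chunks), len(chunks[0]))  # 2 14
```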
### `cotrain-vqgan-vision-cache` (Optional, speeds up training)

This optional dataset contains precomputed VQGAN token sequences for each frame. It speeds up training by avoiding repeated tokenization of raw pixels. If you don't use the cache, ViPRA can tokenize frames on the fly by setting `vqgan_path` to the VQ-GAN weights from `LWM-Chat-1M-Jax`.

## Dataset Structure Requirements

You can match these layouts or extend `laq/model/data.py` (in the [code repository](https://github.com/sroutray/vipra)) to support your own.

#### Something-Something-v2 (SSv2)

```text
ssv2/
├── labels/
│   ├── train.json
│   ├── validation.json
│   └── test.json
└── 20bn-something-something-v2/
    ├── [video_id].webm
    └── ...
```

Example config:

```python
ssv2 = dict(
    root_dir=Path("/path/to/ssv2"),
    split="trainval",  # "train", "val", "trainval", "test", "all"
    stepsize=2,        # frame sampling stride
)
```

#### OpenX Datasets (Fractal, Bridge, Kuka)

```text
dataset_name/
├── processed/
│   ├── trajectory_001/
│   │   └── images/
│   │       ├── 000000.jpg
│   │       ├── 000001.jpg
│   │       └── ...
│   ├── trajectory_002/
│   └── ...
```

Example config:

```python
bridge = dict(
    root_dir=Path("/path/to/bridge"),
    split="trainval",
    num_trajs=dict(trainval=25460, val=2546),
    stepsize=1,
)
```

#### LIBERO

```text
LIBERO/
├── libero_10_modified/
│   └── images/trajectory_001/000000.jpg
├── libero_goal_modified/
│   └── images/...
├── libero_object_modified/
│   └── images/...
└── libero_spatial_modified/
    └── images/...
```

Example config:

```python
libero = dict(
    root_dir=Path("/path/to/LIBERO"),
    split="trainval",
    num_trajs=dict(trainval=1.0, val=0.1),  # float = percentage
    stepsize=1,
)
```

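As the config comments note, a float `num_trajs` value is read as a percentage of the available trajectories, while an integer is an absolute count. A minimal, hypothetical helper sketching that convention (`resolve_num_trajs` is illustrative, not part of the ViPRA codebase):

```python
# Hypothetical helper: resolve a num_trajs entry from the configs above.
# A float is a fraction of the available trajectories (1.0 -> all,
# 0.1 -> 10%); an int is an absolute count, capped at what exists.
def resolve_num_trajs(value, total_available):
    """Return the number of trajectories to load for a split."""
    if isinstance(value, float):
        return int(total_available * value)
    return min(value, total_available)

print(resolve_num_trajs(0.1, 500))      # 50
print(resolve_num_trajs(1.0, 25460))    # 25460
print(resolve_num_trajs(2546, 25460))   # 2546
```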
## Sample JSONL Entry

Each line in the `cotrain-dynamics14` dataset is a JSON object representing a training sample with latent actions. Here's an example:

```json
{
  "instruction": "pick up the red block and place it in the blue bowl",
  "raw_action": [0.1, -0.2, 0.05, 0.0, 0.0, 0.0, 1.0],
  "image": ["libero_10_modified/images/traj_001/step0000.jpg", "libero_10_modified/images/traj_001/step0001.jpg"],
  "latent_state": ["libero_10_modified/images/traj_001/step0015.jpg"],
  "latent_action_idxs": [3, 7, 1, 4, 2, 6, 0, 5, 1, 3, 7, 2, 4, 0, 6, 1],
  "fields_la": "[instruction],[vision],latent_action",
  "fields_ls": "[instruction],[vision],latent_state",
  "fields_ls_la": "[instruction],[vision],latent_state,latent_action"
}
```

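Since the format is plain JSON Lines, entries can be read with the standard library alone. A minimal sketch (the file path in the commented usage is illustrative, not a guaranteed filename):

```python
# Minimal JSONL reader sketch for cotrain-dynamics14-style files:
# one JSON object per line, blank lines skipped.
import json

def iter_samples(jsonl_path):
    """Yield one dict per non-empty line of a JSONL file."""
    with open(jsonl_path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Example usage, assuming a downloaded shard (path illustrative):
# for sample in iter_samples("cotrain_data/train.jsonl"):
#     print(sample["instruction"], len(sample["latent_action_idxs"]))
```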
## Sample Usage

### Downloading Pretraining Data

You can download the `cotrain-dynamics14` dataset and the optional `cotrain-vqgan-vision-cache` using the Hugging Face CLI:

```bash
# Download the cotrain-dynamics14 dataset
mkdir cotrain_data
huggingface-cli download vipra-project/cotrain-dynamics14 --repo-type dataset --local-dir cotrain_data/

# Download the cotrain-vqgan-vision-cache (optional, speeds up training)
mkdir vision_cache
huggingface-cli download vipra-project/cotrain-vqgan-vision-cache --repo-type dataset --local-dir vision_cache/
```

### Using the ViPRA Client for Action Inference

The `ViPRAClient` class (available in the [code repository](https://github.com/sroutray/vipra) under `vipra/inference/dynamics_action_cont_client.py`) provides a simple interface to communicate with an inference server and obtain robot actions. This demonstrates how the dataset's visual inputs and task descriptions are used for inference.

```python
from inference.dynamics_action_cont_client import ViPRAClient
import numpy as np

client = ViPRAClient(
    server_url="http://localhost:8005",
    timeout=(1.0, 5.0),
    image_size=256,
)

task_description = "pick up the red block and place it in the blue bowl"
client.reset_policy(task_description)

image1 = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
image2 = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)

# Two request modes are available:
actions = client.get_action([image1, image2], mode="json")   # JSON mode (baseline)
actions = client.get_action([image1, image2], mode="bytes")  # JPEG mode (faster)
```

---

## Citation

If you find our code or models useful in your work, please cite:

```bibtex
@inproceedings{routray2025vipra,
  title     = {ViPRA: Video Prediction for Robot Actions},
  author    = {Routray, Sandeep and Pan, Hengkai and Jain, Unnat and Bahl, Shikhar and Pathak, Deepak},
  booktitle = {NeurIPS 2025 Workshop on Embodied World Models for Decision Making},
  year      = {2025},
  month     = dec
}
```

---

## License

ViPRA's code and model weights are released under the [Apache License 2.0](https://github.com/sroutray/vipra/blob/main/LICENSE).