---
license: cc-by-4.0
configs:
- config_name: metaworld
data_files:
- split: train
path: metaworld/hf_data/*
- config_name: robocasa
data_files:
- split: train
path: robocasa/hf_data/*
- config_name: franka_custom
data_files:
- split: train
path: franka_custom/hf_data/*
- config_name: pusht
data_files:
- split: train
path: pusht/hf_data/*
- config_name: wall
data_files:
- split: train
path: wall/hf_data/*
- config_name: point_maze
data_files:
- split: train
path: point_maze/hf_data/*
---
<h1 align="center">
<p>🌍 <b>JEPA-WMs Datasets</b></p>
</h1>
<h2 align="center">
<p><i>Robotics trajectories for world model training πŸ€–</i></p>
</h2>
<div align="center" style="line-height: 1;">
<a href="https://github.com/facebookresearch/jepa-wms" target="_blank" style="margin: 2px;"><img alt="Github" src="https://img.shields.io/badge/Github-facebookresearch/jepa--wms-black?logo=github" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://huggingface.co/datasets/facebook/jepa-wms" target="_blank" style="margin: 2px;"><img alt="HuggingFace" src="https://img.shields.io/badge/πŸ€—%20HuggingFace-facebook/jepa--wms-ffc107" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://arxiv.org/abs/2512.24497" target="_blank" style="margin: 2px;"><img alt="ArXiv" src="https://img.shields.io/badge/arXiv-2512.24497-b5212f?logo=arxiv" style="display: inline-block; vertical-align: middle;"/></a>
</div>
<br>
<p align="center">
<b><a href="https://ai.facebook.com/research/">Meta AI Research, FAIR</a></b>
</p>
<p align="center">
This πŸ€— HuggingFace repository hosts datasets for training <b>JEPA-WM</b> world models.<br>
πŸ‘‰ See the <a href="https://github.com/facebookresearch/jepa-wms">main repository</a> for training code and pretrained models.
</p>
> **πŸ‘οΈ Preview Images:** To view example images in the Dataset Viewer above, select a dataset configuration (e.g., `metaworld`, `pusht`) and click **"Run query"**.
---
## πŸ“¦ Downloading Data
Use the download script from the [main repository](https://github.com/facebookresearch/jepa-wms):
```bash
# Download all datasets
python src/scripts/download_data.py
# Download specific dataset(s)
python src/scripts/download_data.py --dataset pusht point_maze wall
# List available datasets
python src/scripts/download_data.py --list
```
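You can also take a quick look at the preview examples directly with the 🤗 `datasets` library. A minimal sketch, assuming the repo id `facebook/jepa-wms` and the config/split names from the YAML header of this card; note that the `hf_data/` parquet files only contain small viewer shards, so the download script above remains the way to get the full raw data.
```python
from datasets import load_dataset

# Load the small viewer/preview shard for one config; config names
# ("metaworld", "pusht", ...) come from the YAML header of this card.
ds = load_dataset("facebook/jepa-wms", "pusht", split="train")

print(ds)            # column names and number of preview rows
print(ds[0].keys())  # fields available in a single example
```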
---
## πŸ“‹ Available Datasets
| Dataset | Description | Format |
|---------|-------------|--------|
| 🏭 **metaworld** | Tabletop manipulation (42 tasks) | `.mp4` + `.parquet` |
| 🏠 **robocasa** | Kitchen manipulation | `.hdf5` |
| 🦾 **franka_custom** | Real Franka robot (3 views) | `.h5` per episode |
| πŸ”΅ **pusht** | Push-T block pushing | `.zip` πŸ“¦ |
| πŸšͺ **wall** | Point navigation through doors | `.zip` πŸ“¦ |
| 🧩 **point_maze** | Point navigation in mazes | `.zip` πŸ“¦ |
> πŸ’‘ The `pusht`, `wall`, and `point_maze` datasets are sourced from [DINO-WM](https://github.com/apple/ml-dino-wm) and re-hosted here for convenience.
---
<details>
<summary><b>πŸ“š Dataset Details</b></summary>
### 🏭 Metaworld
Tabletop robotic manipulation across 42 different tasks.
| Field | Shape | Description |
|-------|-------|-------------|
| `observation` | 224Γ—224 RGB | Rendered observation image |
| `state` | 39-dim | Full state vector |
| `action` | 4-dim | End-effector action |
| `reward` | scalar | Task reward |
| `task` | string | Task name (e.g., "drawer-open") |
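To sanity-check a converted shard without the full training stack, you can open the example parquet directly. A minimal sketch with `pandas`; the exact file name under `metaworld/hf_data/` and the on-disk encoding of the `observation` column are assumptions, so adjust to what `scripts/convert_to_hf.py` actually emits.
```python
import pandas as pd

# Hypothetical shard name -- list metaworld/hf_data/ for the real one.
df = pd.read_parquet("metaworld/hf_data/train-00000-of-00001.parquet")

print(df.columns.tolist())       # expect observation, state, action, reward, task
print(df["task"].unique())       # task names such as "drawer-open"
print(len(df["state"].iloc[0]))  # should be 39 (full state vector)
```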
### 🏠 RoboCasa
Kitchen manipulation with multiple camera views.
| Field | Shape | Description |
|-------|-------|-------------|
| `eye_in_hand` | 256Γ—256 RGB | Eye-in-hand camera |
| `leftview` | 256Γ—256 RGB | Left view camera |
| `action` | 12-dim | Robot action |
| `state_*` | various | State observations |
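Since RoboCasa ships as a single HDF5 file, a quick way to discover its exact group layout is to walk it with `h5py`. Only the file name below comes from this repo's structure; the internal hierarchy is left for `visititems` to reveal rather than assumed.
```python
import h5py

# Walk the HDF5 hierarchy and print each dataset's path and shape.
with h5py.File("robocasa/combine_all_im256.hdf5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```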
### 🦾 Franka Custom
Real Franka robot with 3 camera views.
| Field | Shape | Description |
|-------|-------|-------------|
| `exterior_image_1_left` | 480Γ—640 RGB | Exterior camera 1 |
| `exterior_image_2_left` | 480Γ—640 RGB | Exterior camera 2 |
| `wrist_image_left` | 480Γ—640 RGB | Wrist-mounted camera |
| `cartesian_position` | 6-dim | End-effector pose |
| `joint_position` | 7-dim | Joint angles |
| `gripper_position` | scalar | Gripper state |
### πŸ”΅ Push-T
Block pushing task from the Push-T benchmark.
| Field | Shape | Description |
|-------|-------|-------------|
| `observation` | 224Γ—224 RGB | Rendered observation |
| `state` | 5-dim | Block + agent state |
| `action` | 2-dim | Relative position action |
| `velocity` | 2-dim | Agent velocity |
### πŸšͺ Wall
Point navigation through walls with doors.
| Field | Shape | Description |
|-------|-------|-------------|
| `observation` | 224Γ—224 RGB | Rendered observation |
| `state` | 2-dim | Position (x, y) |
| `action` | 2-dim | Movement action |
| `door_location` | scalar | Door y-position |
| `wall_location` | scalar | Wall x-position |
### 🧩 Point Maze
Point navigation in procedural mazes.
| Field | Shape | Description |
|-------|-------|-------------|
| `observation` | 224Γ—224 RGB | Rendered observation |
| `state` | 4-dim | Position + velocity |
| `action` | 2-dim | Movement action |
</details>
---
<details>
<summary><b>πŸ“ Repository Structure</b></summary>
```
.
├── 📄 README.md
├── 📄 pyproject.toml
├── 📂 scripts/                   # 🛠️ Utility scripts
│   ├── convert_to_hf.py          # Convert raw → parquet
│   ├── visualize.py              # Visualize converted data
│   └── upload_to_hf.py           # Upload to HuggingFace
├── 📂 metaworld/
│   ├── hf_data/                  # Example parquet (for dataset viewer)
│   └── data/                     # Raw parquet files
├── 📂 robocasa/
│   ├── hf_data/                  # Example parquet (for dataset viewer)
│   └── combine_all_im256.hdf5    # Raw HDF5
├── 📂 franka_custom/
│   ├── hf_data/                  # Example parquet (for dataset viewer)
│   └── data/                     # Raw H5 files (per episode)
├── 📂 pusht/
│   ├── hf_data/                  # Example parquet (for dataset viewer)
│   └── pusht_noise.zip           # Raw data (zipped)
├── 📂 wall/
│   ├── hf_data/                  # Example parquet (for dataset viewer)
│   └── wall_single.zip           # Raw data (zipped)
└── 📂 point_maze/
    ├── hf_data/                  # Example parquet (for dataset viewer)
    └── point_maze.zip            # Raw data (zipped)
```
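If you only need one raw artifact rather than everything the download script fetches, `huggingface_hub` can pull a single file. A sketch assuming the file paths shown in the tree above:
```python
from huggingface_hub import hf_hub_download

# Fetch just the zipped Push-T raw data; the path matches the tree above.
path = hf_hub_download(
    repo_id="facebook/jepa-wms",
    repo_type="dataset",
    filename="pusht/pusht_noise.zip",
)
print(path)  # local cache location of the downloaded file
```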
</details>
---
<details>
<summary><b>πŸ› οΈ Development Scripts</b></summary>
These scripts are for dataset maintainers and developers.
### πŸ”„ Convert Raw Data to Parquet
```bash
# Analyze dataset structure
python scripts/convert_to_hf.py --dataset metaworld --analyze
# Convert episode 0 (default)
python scripts/convert_to_hf.py --dataset metaworld --convert
python scripts/convert_to_hf.py --dataset pusht --convert
python scripts/convert_to_hf.py --dataset wall --convert
python scripts/convert_to_hf.py --dataset point_maze --convert
python scripts/convert_to_hf.py --dataset robocasa --convert
python scripts/convert_to_hf.py --dataset franka_custom --convert
# Convert specific episode with options
python scripts/convert_to_hf.py --dataset wall --convert --episode 5 --max-frames 50
```
### πŸ‘€ Visualize Converted Data
```bash
# Display frames in matplotlib window
python scripts/visualize.py --dataset metaworld
python scripts/visualize.py --dataset pusht
# Save visualization to file
python scripts/visualize.py --dataset point_maze --num-frames 12 --save output.png
# Print dataset info only
python scripts/visualize.py --dataset robocasa --info-only
```
### ☁️ Upload to HuggingFace
```bash
# Upload a single file
python scripts/upload_to_hf.py --file robocasa/hf_data/train-00000-of-00001.parquet
# Upload an entire folder
python scripts/upload_to_hf.py --folder franka_custom --message "Add franka_custom data"
# Upload all parquet files
python scripts/upload_to_hf.py --all
```
</details>
---
## πŸ“„ License
This dataset is released under the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).
---
## πŸ“š Citation
If you find these datasets useful, please consider giving a ⭐ and citing:
```bibtex
@misc{terver2025drivessuccessphysicalplanning,
  title={What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?},
  author={Basile Terver and Tsung-Yen Yang and Jean Ponce and Adrien Bardes and Yann LeCun},
  year={2025},
  eprint={2512.24497},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2512.24497},
}
```
---
<p align="center">
Made with ❀️ by <a href="https://ai.facebook.com/research/">Meta AI Research, FAIR</a>
</p>