---
license: cc-by-4.0
configs:
- config_name: metaworld
  data_files:
  - split: train
    path: metaworld/hf_data/*
- config_name: robocasa
  data_files:
  - split: train
    path: robocasa/hf_data/*
- config_name: franka_custom
  data_files:
  - split: train
    path: franka_custom/hf_data/*
- config_name: pusht
  data_files:
  - split: train
    path: pusht/hf_data/*
- config_name: wall
  data_files:
  - split: train
    path: wall/hf_data/*
- config_name: point_maze
  data_files:
  - split: train
    path: point_maze/hf_data/*
---

<h1 align="center">
<p><b>JEPA-WMs Datasets</b></p>
</h1>

<h2 align="center">
<p><i>Robotics trajectories for world model training 🤗</i></p>
</h2>

<div align="center" style="line-height: 1;">
<a href="https://github.com/facebookresearch/jepa-wms" target="_blank" style="margin: 2px;"><img alt="Github" src="https://img.shields.io/badge/Github-facebookresearch/jepa--wms-black?logo=github" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://huggingface.co/datasets/facebook/jepa-wms" target="_blank" style="margin: 2px;"><img alt="HuggingFace" src="https://img.shields.io/badge/🤗%20HuggingFace-facebook/jepa--wms-ffc107" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://arxiv.org/abs/2512.24497" target="_blank" style="margin: 2px;"><img alt="ArXiv" src="https://img.shields.io/badge/arXiv-2512.24497-b5212f?logo=arxiv" style="display: inline-block; vertical-align: middle;"/></a>
</div>

<br>

<p align="center">
<b><a href="https://ai.facebook.com/research/">Meta AI Research, FAIR</a></b>
</p>

<p align="center">
This 🤗 HuggingFace repository hosts datasets for training <b>JEPA-WM</b> world models.<br>
See the <a href="https://github.com/facebookresearch/jepa-wms">main repository</a> for training code and pretrained models.
</p>

> **👁️ Preview Images:** To view example images in the Dataset Viewer above, select a dataset configuration (e.g., `metaworld`, `pusht`) and click **"Run query"**.

---

## 📦 Downloading Data

Use the download script from the [main repository](https://github.com/facebookresearch/jepa-wms):

```bash
# Download all datasets
python src/scripts/download_data.py

# Download specific dataset(s)
python src/scripts/download_data.py --dataset pusht pointmaze wall

# List available datasets
python src/scripts/download_data.py --list
```
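
If you prefer not to use the helper script, the same files can also be fetched directly with the `huggingface_hub` client. This is a minimal sketch, not part of the official tooling; the folder patterns follow the `path` entries in the configs above:

```python
from huggingface_hub import snapshot_download


def fetch_jepa_wms(patterns=("pusht/*",)) -> str:
    """Download files matching the given patterns from the dataset
    repo and return the local cache path."""
    return snapshot_download(
        repo_id="facebook/jepa-wms",
        repo_type="dataset",
        allow_patterns=list(patterns),
    )


if __name__ == "__main__":
    # e.g. fetch only the pusht folder; use patterns=None upstream
    # of allow_patterns to mirror the whole repo instead.
    print(fetch_jepa_wms())
```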

---

## Available Datasets

| Dataset | Description | Format |
|---------|-------------|--------|
| **metaworld** | Tabletop manipulation (42 tasks) | `.mp4` + `.parquet` |
| **robocasa** | Kitchen manipulation | `.hdf5` |
| 🦾 **franka_custom** | Real Franka robot (3 views) | `.h5` per episode |
| 🔵 **pusht** | Push-T block pushing | `.zip` 📦 |
| 🚪 **wall** | Point navigation through doors | `.zip` 📦 |
| 🧩 **point_maze** | Point navigation in mazes | `.zip` 📦 |

> 💡 The `pusht`, `wall`, and `point_maze` datasets are sourced from [DINO-WM](https://github.com/apple/ml-dino-wm) and re-hosted here for convenience.

---

<details>
<summary><b>Dataset Details</b></summary>

### Metaworld

Tabletop robotic manipulation across 42 different tasks.

| Field | Shape | Description |
|-------|-------|-------------|
| `observation` | 224×224 RGB | Rendered observation image |
| `state` | 39-dim | Full state vector |
| `action` | 4-dim | End-effector action |
| `reward` | scalar | Task reward |
| `task` | string | Task name (e.g., "drawer-open") |

### RoboCasa

Kitchen manipulation with multiple camera views.

| Field | Shape | Description |
|-------|-------|-------------|
| `eye_in_hand` | 256×256 RGB | Eye-in-hand camera |
| `leftview` | 256×256 RGB | Left view camera |
| `action` | 12-dim | Robot action |
| `state_*` | various | State observations |

### 🦾 Franka Custom

Real Franka robot with 3 camera views.

| Field | Shape | Description |
|-------|-------|-------------|
| `exterior_image_1_left` | 480×640 RGB | Exterior camera 1 |
| `exterior_image_2_left` | 480×640 RGB | Exterior camera 2 |
| `wrist_image_left` | 480×640 RGB | Wrist-mounted camera |
| `cartesian_position` | 6-dim | End-effector pose |
| `joint_position` | 7-dim | Joint angles |
| `gripper_position` | scalar | Gripper state |

### 🔵 Push-T

Block pushing task from the Push-T benchmark.

| Field | Shape | Description |
|-------|-------|-------------|
| `observation` | 224×224 RGB | Rendered observation |
| `state` | 5-dim | Block + agent state |
| `action` | 2-dim | Relative position action |
| `velocity` | 2-dim | Agent velocity |

### 🚪 Wall

Point navigation through walls with doors.

| Field | Shape | Description |
|-------|-------|-------------|
| `observation` | 224×224 RGB | Rendered observation |
| `state` | 2-dim | Position (x, y) |
| `action` | 2-dim | Movement action |
| `door_location` | scalar | Door y-position |
| `wall_location` | scalar | Wall x-position |

### 🧩 Point Maze

Point navigation in procedural mazes.

| Field | Shape | Description |
|-------|-------|-------------|
| `observation` | 224×224 RGB | Rendered observation |
| `state` | 4-dim | Position + velocity |
| `action` | 2-dim | Movement action |

</details>

---

<details>
<summary><b>Repository Structure</b></summary>

```
.
├── README.md
├── pyproject.toml
├── scripts/                       # 🛠️ Utility scripts
│   ├── convert_to_hf.py           # Convert raw → parquet
│   ├── visualize.py               # Visualize converted data
│   └── upload_to_hf.py            # Upload to HuggingFace
├── metaworld/
│   ├── hf_data/                   # Example parquet (for dataset viewer)
│   └── data/                      # Raw parquet files
├── robocasa/
│   ├── hf_data/                   # Example parquet (for dataset viewer)
│   └── combine_all_im256.hdf5     # Raw HDF5
├── franka_custom/
│   ├── hf_data/                   # Example parquet (for dataset viewer)
│   └── data/                      # Raw H5 files (per episode)
├── pusht/
│   ├── hf_data/                   # Example parquet (for dataset viewer)
│   └── pusht_noise.zip            # Raw data (zipped)
├── wall/
│   ├── hf_data/                   # Example parquet (for dataset viewer)
│   └── wall_single.zip            # Raw data (zipped)
└── point_maze/
    ├── hf_data/                   # Example parquet (for dataset viewer)
    └── point_maze.zip             # Raw data (zipped)
```

</details>

---

<details>
<summary><b>🛠️ Development Scripts</b></summary>

These scripts are for dataset maintainers and developers.

### Convert Raw Data to Parquet

```bash
# Analyze dataset structure
python scripts/convert_to_hf.py --dataset metaworld --analyze

# Convert episode 0 (default)
python scripts/convert_to_hf.py --dataset metaworld --convert
python scripts/convert_to_hf.py --dataset pusht --convert
python scripts/convert_to_hf.py --dataset wall --convert
python scripts/convert_to_hf.py --dataset point_maze --convert
python scripts/convert_to_hf.py --dataset robocasa --convert
python scripts/convert_to_hf.py --dataset franka_custom --convert

# Convert a specific episode with options
python scripts/convert_to_hf.py --dataset wall --convert --episode 5 --max-frames 50
```

### Visualize Converted Data

```bash
# Display frames in a matplotlib window
python scripts/visualize.py --dataset metaworld
python scripts/visualize.py --dataset pusht

# Save visualization to file
python scripts/visualize.py --dataset point_maze --num-frames 12 --save output.png

# Print dataset info only
python scripts/visualize.py --dataset robocasa --info-only
```

### ☁️ Upload to HuggingFace

```bash
# Upload a single file
python scripts/upload_to_hf.py --file robocasa/hf_data/train-00000-of-00001.parquet

# Upload an entire folder
python scripts/upload_to_hf.py --folder franka_custom --message "Add franka_custom data"

# Upload all parquet files
python scripts/upload_to_hf.py --all
```

</details>

---

## License

This dataset is released under the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).

---

## Citation

If you find these datasets useful, please consider giving a ⭐ and citing:

```bibtex
@misc{terver2025drivessuccessphysicalplanning,
  title={What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?},
  author={Basile Terver and Tsung-Yen Yang and Jean Ponce and Adrien Bardes and Yann LeCun},
  year={2025},
  eprint={2512.24497},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2512.24497},
}
```

---

<p align="center">
Made with ❤️ by <a href="https://ai.facebook.com/research/">Meta AI Research, FAIR</a>
</p>