Add paper link, project page, and metadata to dataset card #2
by nielsr (HF Staff) · opened

README.md CHANGED
@@ -1,3 +1,37 @@
---
license: cc-by-sa-4.0
task_categories:
- robotics
---

# Latent Particle World Models (LPWM)

[Project Website](https://taldatech.github.io/lpwm-web) | [Paper](https://huggingface.co/papers/2603.04553) | [GitHub](https://github.com/taldatech/lpwm)

Latent Particle World Model (LPWM) is a self-supervised, object-centric world model that scales to real-world multi-object datasets and is applicable to decision-making. LPWM autonomously discovers keypoints, bounding boxes, and object masks directly from video, learning rich scene decompositions without supervision. The architecture is trained end-to-end purely from videos and supports flexible conditioning on actions, language, and image goals.
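
For orientation, pulling a dataset published under a card like this one typically goes through the `datasets` library. A minimal sketch follows; the repo id is a placeholder (this PR does not name one), and the record fields depend on how the videos were packaged:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual dataset path shown on this card's Hub page.
ds = load_dataset("user/lpwm-data", split="train", streaming=True)

# Inspect one record to see the available fields (video frames, actions, etc.).
sample = next(iter(ds))
print(sample.keys())
```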

## Sample Usage

To train LPWM on a dataset such as Sketchy with the official implementation, run the following commands:

```bash
# Install environment
conda env create -f environment.yml
conda activate dlp

# Train LPWM on Sketchy
python train_lpwm.py --dataset sketchy
```
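
As a concept-level illustration of the action conditioning mentioned above, here is a schematic sketch of the kind of interface an action-conditioned world model exposes: encode an observation into an object-centric latent state, then roll that state forward under actions. Every name below (`ToyWorldModel`, `encode`, `predict`) is invented for this sketch and is not the LPWM API:

```python
import numpy as np

class ToyWorldModel:
    """Illustrative stand-in, not the LPWM implementation."""

    def encode(self, frame: np.ndarray) -> np.ndarray:
        # Stand-in "particle" state: a handful of 2D keypoints.
        return np.zeros((8, 2))

    def predict(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        # A trained model learns this transition; here we just shift the keypoints.
        return state + action

model = ToyWorldModel()
state = model.encode(np.zeros((64, 64, 3)))  # initial observation (H, W, C)
for action in np.random.randn(10, 2):        # a 10-step imagined rollout
    state = model.predict(state, action)
```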

## Citation

```bibtex
@inproceedings{daniel2026latent,
  title={Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling},
  author={Tal Daniel and Carl Qi and Dan Haramati and Amir Zadeh and Chuan Li and Aviv Tamar and Deepak Pathak and David Held},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=lTaPtGiUUc}
}
```