---
license: cc-by-sa-4.0
task_categories:
- robotics
---

# Latent Particle World Models (LPWM)

[Project Website](https://taldatech.github.io/lpwm-web) | [Paper](https://huggingface.co/papers/2603.04553) | [GitHub](https://github.com/taldatech/lpwm)

Latent Particle World Models (LPWM) is a self-supervised, object-centric world model that scales to real-world multi-object datasets and is applicable to decision-making. LPWM autonomously discovers keypoints, bounding boxes, and object masks directly from video, learning rich scene decompositions without supervision. The architecture is trained end-to-end purely from videos and supports flexible conditioning on actions, language, and image goals.
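To give a feel for the keypoint-discovery idea, the toy sketch below shows spatial-softmax keypoint extraction, a common building block in object-centric latent-particle models. This is an illustrative standalone example, not the official LPWM implementation; the function name and shapes are assumptions for the sketch.

```python
import numpy as np

def spatial_softmax_keypoints(feature_maps):
    """Extract (x, y) keypoints from feature maps via spatial softmax.

    feature_maps: array of shape (K, H, W), one map per keypoint.
    Returns: array of shape (K, 2), coordinates in [-1, 1].
    """
    k, h, w = feature_maps.shape
    # Softmax over the spatial locations of each map
    flat = feature_maps.reshape(k, -1)
    flat = flat - flat.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    probs = probs.reshape(k, h, w)
    # Coordinate grids normalized to [-1, 1]
    ys = np.linspace(-1.0, 1.0, h)
    xs = np.linspace(-1.0, 1.0, w)
    # Expected (x, y) location under each spatial distribution
    y = (probs.sum(axis=2) * ys).sum(axis=1)
    x = (probs.sum(axis=1) * xs).sum(axis=1)
    return np.stack([x, y], axis=1)

# Toy example: a sharp activation peak at the centre of a 5x5 map
fm = np.zeros((1, 5, 5))
fm[0, 2, 2] = 10.0
print(spatial_softmax_keypoints(fm))  # close to [[0., 0.]]
```

In the full model, such keypoints would be predicted per object by a learned encoder and carried through the latent dynamics; the sketch only illustrates how a differentiable "argmax" turns a feature map into a coordinate.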

## Sample Usage

To train LPWM on a dataset like Sketchy using the official implementation, you can use the following commands:

```bash
# Install environment
conda env create -f environment.yml
conda activate dlp

# Train LPWM on Sketchy
python train_lpwm.py --dataset sketchy
```

## Citation

```bibtex
@inproceedings{daniel2026latent,
    title={Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling},
    author={Tal Daniel and Carl Qi and Dan Haramati and Amir Zadeh and Chuan Li and Aviv Tamar and Deepak Pathak and David Held},
    booktitle={The Fourteenth International Conference on Learning Representations},
    year={2026},
    url={https://openreview.net/forum?id=lTaPtGiUUc}
}
```