---
license: cc-by-4.0
task_categories:
- robotics
tags:
- robot
- ogbench
- rl
- imitation
- learning
- simulation
- manipulation
---
# OGBench Data for Latent Particle World Models (LPWM)

This repository contains pre-processed 64x64 frames for the `scene` and `cube` tasks from the [OGBench benchmark](https://github.com/seohongpark/ogbench). The dataset includes the actions and frames used for training and evaluating **Latent Particle World Models (LPWM)**.

LPWM is a self-supervised object-centric world model that autonomously discovers keypoints, bounding boxes, and object masks directly from video data. It is designed to scale to real-world multi-object datasets and is applicable to decision-making tasks such as goal-conditioned imitation learning.

- **Paper:** [Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling](https://huggingface.co/papers/2603.04553)
- **Project Page:** [https://taldatech.github.io/lpwm-web](https://taldatech.github.io/lpwm-web)
- **GitHub Repository:** [https://github.com/taldatech/lpwm](https://github.com/taldatech/lpwm)
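The card above does not document the on-disk layout of the frames and actions. As an illustration only, here is a minimal sketch of how episode data of this shape might be stored and inspected, assuming NumPy `.npz` archives with `frames` (T, 64, 64, 3) uint8 images and `actions` (T, action_dim) floats; the file format, key names, and action dimension are all hypothetical, so check the GitHub repository for the actual loading code.

```python
# Hypothetical sketch: assumes NumPy .npz archives holding one episode
# as "frames" (T, 64, 64, 3) uint8 and "actions" (T, action_dim) float32.
import io

import numpy as np

# Build a tiny synthetic episode that mimics the assumed layout.
T, action_dim = 8, 5
frames = np.zeros((T, 64, 64, 3), dtype=np.uint8)
actions = np.zeros((T, action_dim), dtype=np.float32)

# Round-trip through an in-memory .npz, as one would with a file on disk.
buf = io.BytesIO()
np.savez_compressed(buf, frames=frames, actions=actions)
buf.seek(0)
episode = np.load(buf)

print(episode["frames"].shape)   # (8, 64, 64, 3)
print(episode["actions"].shape)  # (8, 5)
```

With real files, `np.savez_compressed(buf, ...)` and the `BytesIO` round-trip would be replaced by a path to an episode archive on disk.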
## Citation

If you use this data or the LPWM model in your research, please cite the following paper:

```bibtex
@inproceedings{daniel2026latent,
  title={Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling},
  author={Tal Daniel and Carl Qi and Dan Haramati and Amir Zadeh and Chuan Li and Aviv Tamar and Deepak Pathak and David Held},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=lTaPtGiUUc}
}
```