---
language:
- en
license: cc-by-nc-sa-4.0
task_categories:
- other
---

# OWM Benchmark

[![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://compvis.github.io/myriad)
[![Paper](https://img.shields.io/badge/arXiv-paper-b31b1b)](https://huggingface.co/papers/2604.09527)
[![MYRIAD Weights](https://img.shields.io/badge/HuggingFace-Weights-orange)](https://huggingface.co/CompVis/myriad)

## Abstract

The OWM benchmark was proposed in the paper [Envisioning the Future, One Step at a Time](https://huggingface.co/papers/2604.09527) and is used to evaluate the [MYRIAD](https://huggingface.co/CompVis/myriad/) model. OWM is a benchmark of 95 curated videos with motion annotations, with the distribution of motion constrained to enable the evaluation of probabilistic motion prediction methods. The videos are obtained from Pexels ([Pexels License](https://www.pexels.com/license/)). We manually annotate the relevant objects and the type of motion observed, use an off-the-shelf tracker to obtain motion trajectories, and manually verify their correctness.

## Project Page and Code

- **Project Page**: https://compvis.github.io/myriad
- **GitHub Repository**: https://github.com/CompVis/flow-poke-transformer

![OWM samples](https://compvis.github.io/myriad/static/images/paper-svg/owm-qualitative.svg)
*OWM samples include complex real-world scenes with different motion types and complexities.*

## Usage

We provide code to run the OWM evaluation in our [GitHub repository](https://github.com/CompVis/flow-poke-transformer). To run the evaluation, first download the data by running `hf download CompVis/owm-95 --repo-type dataset`.
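After downloading, a quick sanity check can confirm that all 95 videos arrived. The sketch below assumes the videos are stored as `.mp4` files somewhere below the data root; the exact directory layout and file extension are assumptions here, so adjust the pattern to match the actual dataset contents.

```python
from pathlib import Path


def count_videos(data_root: str, pattern: str = "*.mp4") -> int:
    """Recursively count video files under the downloaded OWM data root.

    NOTE: the .mp4 extension and the nested layout are assumptions,
    not guaranteed by the dataset; adjust `pattern` if needed.
    """
    return sum(1 for _ in Path(data_root).rglob(pattern))


if __name__ == "__main__":
    n = count_videos("path/to/data")
    print(f"Found {n} video files (OWM contains 95 curated videos)")
```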
Then run the evaluation script:

```shell
python -m scripts.myriad_eval.openset_prediction --data-root path/to/data --ckpt-path path/to/checkpoint --dataset-name owm
```

## License

- Videos are sourced from Pexels and are thus licensed under the [Pexels License](https://www.pexels.com/license/)
- Metadata and motion annotations are provided under the [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license

## Citation

If you find our data or code useful, please cite our paper:

```bibtex
@inproceedings{baumann2026envisioning,
    title={Envisioning the Future, One Step at a Time},
    author={Baumann, Stefan Andreas and Wiese, Jannik and Martorella, Tommaso and Kalayeh, Mahdi M. and Ommer, Bjorn},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2026}
}
```