---
language:
- en
license: cc-by-nc-sa-4.0
task_categories:
- other
---
# OWM Benchmark
[Project Page](https://compvis.github.io/myriad)
[arXiv](https://arxiv.org/abs/2604.09527)
[Paper](https://huggingface.co/papers/2604.09527)
[Code](https://github.com/CompVis/flow-poke-transformer)
[Model](https://huggingface.co/CompVis/myriad)
[Dataset](https://huggingface.co/datasets/CompVis/myriad-physics)
## Abstract
The OWM benchmark was proposed in the paper [Envisioning the Future, One Step at a Time](https://huggingface.co/papers/2604.09527) and used to evaluate the [MYRIAD](https://huggingface.co/CompVis/myriad/) model.
OWM is a benchmark of 95 curated videos with motion annotations; the distribution of motion in each video is constrained so that probabilistic motion prediction methods can be evaluated.
Videos are obtained from Pexels ([Pexels License](https://www.pexels.com/license/)). We manually annotate relevant objects and the type of motion observed. We use an off-the-shelf tracker to obtain motion trajectories and manually verify correctness.
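To illustrate what such trajectories look like, here is a minimal sketch that computes per-point path length from point tracks, assuming an illustrative `(T, N, 2)` array layout (the actual annotation format is defined by the files in this repository):

```python
import numpy as np

# Hypothetical layout: (T, N, 2) -- T frames, N tracked points, (x, y)
# pixel coordinates. The real annotation format may differ.
tracks = np.array([
    [[0.0, 0.0], [10.0, 5.0]],  # frame 0
    [[3.0, 4.0], [10.0, 5.0]],  # frame 1: point 0 moves, point 1 is static
])

step = np.diff(tracks, axis=0)         # per-frame displacement, (T-1, N, 2)
speed = np.linalg.norm(step, axis=-1)  # per-step distance, (T-1, N)
path_length = speed.sum(axis=0)        # total path length per point, (N,)
print(path_length)                     # point 0 moved 5 px, point 1 did not
```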
## Project Page and Code
- **Project Page**: https://compvis.github.io/myriad
- **GitHub Repository**: https://github.com/CompVis/flow-poke-transformer

*OWM samples include complex real-world scenes with different motion types and complexities.*
## Usage
We provide code to run the OWM evaluation in our [GitHub repository](https://github.com/CompVis/flow-poke-transformer).
To run the evaluation, first download the data by running `hf download CompVis/owm-95 --repo-type dataset`.
Then run the evaluation script via:
```shell
python -m scripts.myriad_eval.openset_prediction --data-root path/to/data --ckpt-path path/to/checkpoint --dataset-name owm
```
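The download step can also be scripted with the `huggingface_hub` library instead of the CLI; this is a sketch, and `fetch_owm` is an illustrative helper, not part of the evaluation code:

```python
def fetch_owm(local_dir=None):
    """Download a snapshot of the OWM dataset and return its local path.

    Equivalent to `hf download CompVis/owm-95 --repo-type dataset`.
    """
    from huggingface_hub import snapshot_download  # imported lazily
    return snapshot_download(
        repo_id="CompVis/owm-95",
        repo_type="dataset",
        local_dir=local_dir,
    )
```

The returned path can then be passed as `--data-root` to the evaluation script.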
## License
- Videos are sourced from Pexels and thus licensed under the [Pexels License](https://www.pexels.com/license/)
- Metadata and motion annotations are provided under the [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license
## Citation
If you find our data or code useful, please cite our paper:
```bibtex
@inproceedings{baumann2026envisioning,
  title={Envisioning the Future, One Step at a Time},
  author={Baumann, Stefan Andreas and Wiese, Jannik and Martorella, Tommaso and Kalayeh, Mahdi M. and Ommer, Bj{\"o}rn},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```