---
language:
  - en
license: cc-by-nc-sa-4.0
task_categories:
  - other
---

# OWM Benchmark

Project Page · Paper · MYRIAD Weights

## Abstract

The OWM benchmark was proposed in the paper *Envisioning the Future, One Step at a Time* and is used to evaluate the MYRIAD model.

OWM is a benchmark of 95 curated videos with motion annotations, with the distribution of motion constrained to enable the evaluation of probabilistic motion prediction methods. Videos are obtained from Pexels (Pexels License). We manually annotate relevant objects and the type of motion observed. We use an off-the-shelf tracker to obtain motion trajectories and manually verify correctness.

## Project Page and Code

*OWM samples include complex real-world scenes with different motion types and complexities.*

## Usage

We provide code to run the OWM evaluation in our GitHub repository.

To run the evaluation, first download the data by running `hf download CompVis/owm-95 --repo-type dataset`.
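If you prefer the Python API over the CLI, the same download can be done with `huggingface_hub`. This is a minimal sketch; the `local_dir` default and the `download_owm` helper name are illustrative, not part of the official tooling:

```python
from huggingface_hub import snapshot_download


def download_owm(local_dir: str = "./owm-95") -> str:
    """Download the full OWM-95 dataset repo and return the local path.

    ``local_dir`` is an illustrative default; any writable path works.
    """
    return snapshot_download(
        repo_id="CompVis/owm-95",
        repo_type="dataset",
        local_dir=local_dir,
    )


if __name__ == "__main__":
    # Fetches the dataset (requires network access) and prints where it landed.
    print(download_owm())
```

The returned path can then be passed as `--data-root` to the evaluation script below.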

Then run the evaluation script via:

```shell
python -m scripts.myriad_eval.openset_prediction \
    --data-root path/to/data \
    --ckpt-path path/to/checkpoint \
    --dataset-name owm
```

## License

- Videos are sourced from Pexels and are therefore licensed under the Pexels License.
- Metadata and motion annotations are provided under the CC-BY-NC-SA-4.0 license.

## Citation

If you find our data or code useful, please cite our paper:

```bibtex
@inproceedings{baumann2026envisioning,
  title={Envisioning the Future, One Step at a Time},
  author={Baumann, Stefan Andreas and Wiese, Jannik and Martorella, Tommaso and Kalayeh, Mahdi M. and Ommer, Bjorn},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```