---
language:
- en
license: cc-by-nc-sa-4.0
task_categories:
- other
---

# OWM Benchmark

[![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://compvis.github.io/myriad)
[![Paper](https://img.shields.io/badge/arXiv-paper-b31b1b)](https://huggingface.co/papers/2604.09527)
[![MYRIAD Weights](https://img.shields.io/badge/HuggingFace-Weights-orange)](https://huggingface.co/CompVis/myriad)

## Abstract

The OWM benchmark was proposed in the paper [Envisioning the Future, One Step at a Time](https://huggingface.co/papers/2604.09527) and used to evaluate the [MYRIAD](https://huggingface.co/CompVis/myriad/) model.

OWM is a benchmark of 95 curated videos with motion annotations, where the distribution of motion is constrained to enable the evaluation of probabilistic motion prediction methods.
Videos are obtained from Pexels ([Pexels License](https://www.pexels.com/license/)). We manually annotate relevant objects and the type of motion observed, use an off-the-shelf tracker to obtain motion trajectories, and manually verify their correctness.
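Trajectory annotations like these can be consumed along the following lines. This is a minimal sketch only: the array layout `(T, N, 2)` (T frames, N tracked points, (x, y) pixel coordinates) and the `motion_magnitude` helper are illustrative assumptions, not the dataset's documented schema.

```python
import numpy as np

def motion_magnitude(trajectories: np.ndarray) -> np.ndarray:
    """Total path length per tracked point.

    `trajectories` is assumed to have shape (T, N, 2):
    T frames, N points, (x, y) pixel coordinates.
    """
    # Per-frame displacement vectors, shape (T-1, N, 2).
    steps = np.diff(trajectories, axis=0)
    # Sum of Euclidean step lengths over time, shape (N,).
    return np.linalg.norm(steps, axis=-1).sum(axis=0)

# Toy example: point 0 moves 1 px/frame along x, point 1 is static.
traj = np.zeros((5, 2, 2))
traj[:, 0, 0] = np.arange(5)
print(motion_magnitude(traj))  # → [4. 0.]
```

A statistic like this can, for instance, separate near-static points from moving ones when inspecting tracked trajectories.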

## Project Page and Code

- **Project Page**: https://compvis.github.io/myriad
- **GitHub Repository**: https://github.com/CompVis/flow-poke-transformer

![OWM samples](https://compvis.github.io/myriad/static/images/paper-svg/owm-qualitative.svg)

*OWM samples include complex real-world scenes with different motion types and complexities.*

## Usage

We provide code to run the OWM evaluation in our [GitHub repository](https://github.com/CompVis/flow-poke-transformer).

To run the evaluation, first download the data by running `hf download CompVis/owm-95 --repo-type dataset`.
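The same download can also be done from Python using `huggingface_hub` (a sketch; `snapshot_download` is the library's standard API, while the target directory shown here is an arbitrary choice):

```python
from huggingface_hub import snapshot_download

def download_owm(local_dir: str = "data/owm-95") -> str:
    """Fetch the OWM-95 dataset snapshot into `local_dir` and return its path."""
    return snapshot_download(
        repo_id="CompVis/owm-95",
        repo_type="dataset",
        local_dir=local_dir,
    )

if __name__ == "__main__":
    # Download only when run as a script, not on import.
    print(download_owm())
```

The returned path can then be passed as `--data-root` to the evaluation script.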

Then run the evaluation script via:
```shell
python -m scripts.myriad_eval.openset_prediction --data-root path/to/data --ckpt-path path/to/checkpoint --dataset-name owm
```

## License

- Videos are sourced from Pexels and thus licensed under the [Pexels License](https://www.pexels.com/license/)
- Metadata and motion annotations are provided under the [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license

## Citation

If you find our data or code useful, please cite our paper:

```bibtex
@inproceedings{baumann2026envisioning,
  title={Envisioning the Future, One Step at a Time},
  author={Baumann, Stefan Andreas and Wiese, Jannik and Martorella, Tommaso and Kalayeh, Mahdi M. and Ommer, Bjorn},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```