---
license: cc-by-nc-sa-4.0
language:
- en
---

# OWM-95 Benchmark

[Project Page](https://compvis.github.io/myriad)
[Paper](_blank)
[Model](https://huggingface.co/CompVis/myriad)

## Abstract

The OWM-95 benchmark was proposed in the paper [Envisioning the Future, One Step at a Time](_blank) and used to evaluate the [MYRIAD](https://huggingface.co/CompVis/myriad/) model.

OWM-95 is a benchmark of 95 curated videos with motion annotations, where the distribution of motion is constrained to make evaluation of probabilistic motion prediction methods feasible.
Videos are obtained from Pexels ([Pexels License](https://www.pexels.com/license/)); we manually annotate the relevant objects and the type of motion observed. We use an off-the-shelf tracker to obtain motion trajectories and manually verify their correctness.

## Project Page and Code

- **Project Page**: https://compvis.github.io/myriad
- **GitHub Repository**: https://github.com/CompVis/flow-poke-transformer

## Usage

We provide code to run the OWM evaluation in our [GitHub repository](https://github.com/CompVis/flow-poke-transformer).

To run the evaluation, first download the data with `hf download CompVis/owm-95 --repo-type dataset`, then run the evaluation script:
|
| 30 |
+
```shell
python -m scripts.eval.myriad_eval.owm --checkpoint-path path/to/checkpoint --data-path path/to/data
```
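The data can also be fetched programmatically rather than via the CLI. A minimal sketch using `huggingface_hub.snapshot_download`; the repo id and the `download_owm95` helper name are assumptions for illustration, matching the dataset referenced above:

```python
from huggingface_hub import snapshot_download


def download_owm95(local_dir=None):
    """Download a snapshot of the OWM-95 dataset and return its local path."""
    return snapshot_download(
        repo_id="CompVis/owm-95",  # assumed dataset id; adjust if the hub name differs
        repo_type="dataset",
        local_dir=local_dir,
    )


if __name__ == "__main__":
    # The returned directory can be passed as --data-path to the evaluation script.
    print(download_owm95())
```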

## License

- Videos are sourced from Pexels and thus licensed under the [Pexels License](https://www.pexels.com/license/)
- Metadata and motion annotations are provided under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license

## Citation

If you find our data or code useful, please cite our paper:
```bibtex
@inproceedings{baumann2025whatif,
  title={What If: Understanding Motion Through Sparse Interactions},
  author={Stefan Andreas Baumann and Nick Stracke and Timy Phan and Bj{\"o}rn Ommer},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}
```