---
license: cc-by-nc-sa-4.0
language:
- en
pipeline_tag: image-to-video
---

# MYRIAD (Envisioning the Future, One Step at a Time)
[![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://compvis.github.io/myriad)
[![Paper](https://img.shields.io/badge/arXiv-paper-b31b1b)](_blank)
[![OWM-95](https://img.shields.io/badge/HuggingFace-Dataset-yellow)](https://huggingface.co/datasets/CompVis/owm-95)

## Paper and Abstract

The MYRIAD (Motion hYpothesis Reasoning via Iterative Autoregressive Diffusion) model was presented in the paper [Envisioning the Future, One Step at a Time](_blank).

From a single image, MYRIAD autoregressively predicts distributions over sparse point trajectories. This allows it to predict consistent futures in open-set environments and to plan actions by exploring a large number of counterfactual interactions.

## Project Page and Code

- **Project Page**: https://compvis.github.io/myriad
- **GitHub Repository**: https://github.com/CompVis/flow-poke-transformer

## Usage

For programmatic use, the simplest way to use MYRIAD is via `torch.hub`:

```python
import torch

# Load the pretrained open-set model
myriad_openset = torch.hub.load("CompVis/myriad", "myriad_openset")

# Load the billiard-environment model
myriad_billiard = torch.hub.load("CompVis/myriad", "myriad_billiard")
```

If you wish to integrate MYRIAD into your own codebase, you can copy `model.py` and `dinov3.py` from the [GitHub repository](https://github.com/CompVis/flow-poke-transformer).
The `MyriadStepByStep` class provides a `predict_simulate` method for unrolling trajectories and a low-level `forward` method for predicting distributions over previously observed trajectories.

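The autoregressive unrolling that `predict_simulate` performs can be sketched with a stand-in single-step predictor; `dummy_step` and `unroll` below are illustrative assumptions, not the model's actual interface:

```python
import numpy as np

def dummy_step(points, rng):
    # Stand-in for one autoregressive prediction step: perturb each tracked
    # point with Gaussian noise. MYRIAD instead samples from a learned
    # distribution conditioned on the image and the trajectory so far.
    return points + rng.normal(scale=0.01, size=points.shape)

def unroll(points, n_steps, step_fn, seed=0):
    """Autoregressively unroll sparse point trajectories.

    points: (N, 2) array of initial 2D point positions.
    Returns an (n_steps + 1, N, 2) trajectory, where each step is
    conditioned on the previous one.
    """
    rng = np.random.default_rng(seed)
    traj = [points]
    for _ in range(n_steps):
        traj.append(step_fn(traj[-1], rng))
    return np.stack(traj)

start = np.array([[0.5, 0.5], [0.25, 0.75]])
trajectory = unroll(start, n_steps=8, step_fn=dummy_step)  # shape (9, 2, 2)
```

Because each step conditions only on previously generated steps, rerunning the loop with different seeds yields different but internally consistent futures.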
## Citation

If you find our model or code useful, please cite our paper:

```bibtex
@inproceedings{baumann2026envisioning,
  title={Envisioning the Future, One Step at a Time},
  author={Baumann, Stefan Andreas and Wiese, Jannik and Martorella, Tommaso and Kalayeh, Mahdi M. and Ommer, Bjorn},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```