---
license: mit
---
<h1 align="center">Generative View Stitching</h1>
<p align="center">
<p align="center">
<a href="https://andrewsonga.github.io/">Chonghyuk (ND) Song</a><sup>1</sup>
·
<a href="https://michal-stary.github.io/">Michal Stary</a><sup>1</sup>
·
<a href="https://boyuan.space/">Boyuan Chen</a><sup>1</sup>
·
<a href="https://grgkopanas.github.io/">George Kopanas</a><sup>2</sup>
·
<a href="https://vincentsitzmann.com/">Vincent Sitzmann</a><sup>1</sup>
<br/>
<sup>1</sup>MIT CSAIL, Scene Representation Group <sup>2</sup>Runway ML
</p>
<h3 align="center"><a href="https://arxiv.org/abs/2510.24718">Paper</a> | <a href="https://andrewsonga.github.io/gvs/">Website</a> | <a href="https://github.com/andrewsonga/generative_view_stitching">GitHub</a> </h3>
</p>
This is the official benchmark for the paper [**_Generative View Stitching_**](https://arxiv.org/abs/2510.24718) (GVS), which enables <i>collision-free</i> camera-guided video generation along <i>predefined</i> trajectories and offers a <i>non-autoregressive</i> alternative to video length extrapolation.
![cover_figure](cover_figure.png)
## 🚀 Usage
This benchmark comprises camera trajectories designed to test various video generation capabilities, including video length extrapolation, loop closure, and collision avoidance.
To run GVS on this benchmark, please visit <a href="https://github.com/andrewsonga/generative_view_stitching">our GitHub repository</a> for further instructions.
## 📌 Citation
If our work is useful for your research, please consider giving us a star and citing our paper:
```bibtex
@article{song2025gvs,
  title={Generative View Stitching},
  author={Song, Chonghyuk and Stary, Michal and Chen, Boyuan and Kopanas, George and Sitzmann, Vincent},
  journal={arXiv preprint arXiv:2510.24718},
  year={2025},
}
```