---
license: apache-2.0
pipeline_tag: image-to-image
---
# [CVPR2025] MVGenMaster: Scaling Multi-View Generation from Any Image via 3D Priors Enhanced Diffusion Model
[**Project Page**](https://ewrfcas.github.io/MVGenMaster/) | [**ArXiv**](https://arxiv.org/abs/2411.16157) | [**GitHub**](https://github.com/ewrfcas/MVGenMaster)

If you find our project helpful, please consider citing:
```bibtex
@inproceedings{cao2025mvgenmaster,
  title={MVGenMaster: Scaling Multi-View Generation from Any Image via 3D Priors Enhanced Diffusion Model},
  author={Cao, Chenjie and Yu, Chaohui and Liu, Shang and Wang, Fan and Xue, Xiangyang and Fu, Yanwei},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={6045--6056},
  year={2025}
}
```

## Extended works
### GaMO: Geometry-aware Multi-view Diffusion Outpainting for Sparse-View 3D Reconstruction
[**Project Page**](https://yichuanh.github.io/GaMO/) | [**ArXiv**](https://arxiv.org/abs/2512.25073) | [**GitHub**](https://github.com/yichuanH/GaMO_official)

GaMO (Geometry-aware Multi-view Outpainter) is a framework that reformulates sparse-view reconstruction as multi-view outpainting. Instead of generating new viewpoints, GaMO expands the field of view from existing camera poses, which inherently preserves geometric consistency while providing broader scene coverage.

The approach employs multi-view conditioning and geometry-aware denoising strategies in a zero-shot manner, without training. Extensive experiments on Replica and ScanNet++ demonstrate state-of-the-art reconstruction quality across 3, 6, and 9 input views, outperforming prior methods in PSNR and LPIPS while achieving a 25× speedup over SOTA diffusion-based methods.

### Citation

If you find this work useful, please consider citing:
```bibtex
@article{huang2025gamo,
  title={GaMO: Geometry-aware Multi-view Diffusion Outpainting for Sparse-View 3D Reconstruction},
  author={Huang, Yi-Chuan and Chien, Hao-Jen and Lin, Chin-Yang and Chen, Ying-Huan and Liu, Yu-Lun},
  journal={arXiv preprint arXiv:2512.25073},
  year={2025}
}
```