---
license: mit
pipeline_tag: image-to-image
library_name: diffusers
---
# [ICCV 2025] Towards Open-World Generation of Stereo Images and Unsupervised Matching
[Project Page](https://qjizhi.github.io/genstereo) | [Hugging Face Space](https://huggingface.co/spaces/FQiao/GenStereo) | [Code](https://github.com/Qjizhi/GenStereo) | [Model (SD 2.1)](https://huggingface.co/FQiao/GenStereo-sd2.1/tree/main) | [arXiv](https://arxiv.org/abs/2503.12720)
This repository contains the model presented in [Towards Open-World Generation of Stereo Images and Unsupervised Matching](https://huggingface.co/papers/2503.12720). This model is finetuned from Stable Diffusion v1.5; the SD v2.1 variant is available [here](https://huggingface.co/FQiao/GenStereo-sd2.1).

## Abstract
Stereo images are fundamental to numerous applications, including extended reality (XR) devices, autonomous driving, and robotics. Unfortunately, acquiring high-quality stereo images remains challenging due to the precise calibration requirements of dual-camera setups and the complexity of obtaining accurate, dense disparity maps. Existing stereo image generation methods typically focus on either visual quality for viewing or geometric accuracy for matching, but not both. We introduce GenStereo, a diffusion-based approach, to bridge this gap. The method includes two primary innovations: (1) conditioning the diffusion process on a disparity-aware coordinate embedding and a warped input image, allowing for more precise stereo alignment than previous methods, and (2) an adaptive fusion mechanism that intelligently combines the diffusion-generated image with a warped image, improving both realism and disparity consistency. Through extensive training on 11 diverse stereo datasets, GenStereo demonstrates strong generalization ability. GenStereo achieves state-of-the-art performance in both stereo image generation and unsupervised stereo matching tasks.
## How to use
### Environment
We tested our code on Ubuntu with an NVIDIA A100 GPU. If you are using another platform such as Windows, consider using Docker. You can either install the packages into your own Python environment or build an environment with Docker. All commands below are expected to be run from the root directory of the repository.
We tested the environment with Python `>=3.10` and CUDA `11.8`. To install the mandatory dependencies, run the command below.
``` shell
pip install -r requirements.txt
```
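If you prefer an isolated environment, a minimal sketch using `venv` is shown below (it assumes `python3.10` is available on your `PATH`; a conda environment works just as well).
``` shell
# Minimal sketch: create and activate an isolated environment, then install the dependencies.
# Assumption: python3.10 is installed; adjust the interpreter name to your setup.
python3.10 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```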
To run the development extras, such as the Jupyter notebook example and the Gradio live demo, install the additional dependencies with the command below.
``` shell
pip install -r requirements_dev.txt
```
### Download pretrained models
GenStereo relies on pretrained models, consisting of both our finetuned models and publicly available third-party ones. Download all the models to the `checkpoints` directory (or any location of your choice). You can do this manually or with the [download_models.sh](scripts/download_models.sh) script.
#### Download script
``` shell
bash scripts/download_models.sh
```
#### Manual download
> [!NOTE]
> Models and checkpoints provided below may be distributed under different licenses. Users are responsible for checking the licenses carefully.
1. Our finetuned models. We provide two versions of GenStereo:
   - v1.5: 512px, faster; [model card](https://huggingface.co/FQiao/GenStereo).
   - v2.1: 768px, higher resolution and better quality, but slower; [model card](https://huggingface.co/FQiao/GenStereo-sd2.1).
2. Pretrained third-party models:
   - [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)
     - download `config.json` and `diffusion_pytorch_model.safetensors` to `checkpoints/sd-vae-ft-mse`
   - [sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
     - download `image_encoder/config.json` and `image_encoder/pytorch_model.bin` to `checkpoints/image_encoder`
3. MDE (Monocular Depth Estimation) model:
   - We use [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2) as the MDE model to obtain the disparity maps; download `depth_anything_v2_vitl.pth` to `checkpoints`.
The final `checkpoints` directory must look like this:
```
.
├── depth_anything_v2_vitl.pth
├── genstereo-v1.5
│   ├── config.json
│   ├── denoising_unet.pth
│   ├── fusion_layer.pth
│   ├── pose_guider.pth
│   └── reference_unet.pth
├── genstereo-v2.1
│   ├── config.json
│   ├── denoising_unet.pth
│   ├── fusion_layer.pth
│   ├── pose_guider.pth
│   └── reference_unet.pth
├── image_encoder
│   ├── config.json
│   └── pytorch_model.bin
└── sd-vae-ft-mse
    ├── config.json
    └── diffusion_pytorch_model.safetensors
```
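For reference, the third-party downloads above can also be scripted with `huggingface-cli`. The commands below are a sketch: they assume `huggingface_hub[cli]` is installed, and the last command assumes the GenStereo v1.5 checkpoint files sit at the top level of the `FQiao/GenStereo` repository.
``` shell
# Sketch: fetch the third-party models into the layout shown above.
# Assumption: the CLI is available via `pip install -U "huggingface_hub[cli]"`.
huggingface-cli download stabilityai/sd-vae-ft-mse \
    config.json diffusion_pytorch_model.safetensors \
    --local-dir checkpoints/sd-vae-ft-mse
huggingface-cli download lambdalabs/sd-image-variations-diffusers \
    image_encoder/config.json image_encoder/pytorch_model.bin \
    --local-dir checkpoints

# Assumption: the GenStereo v1.5 weights are stored at the top level of the FQiao/GenStereo repo.
huggingface-cli download FQiao/GenStereo --local-dir checkpoints/genstereo-v1.5
```
The Depth Anything V2 (ViT-L) checkpoint is distributed from its own repository; see the link in the list above.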
### Inference
You can run inference with the following command; the results will be saved under the `./vis` folder.
```bash
python test.py /path/to/your/image
```
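To process a whole folder, a simple loop over the documented single-image command works; a sketch assuming `.jpg`/`.png` inputs:
```bash
# Sketch: run inference on every .jpg/.png in a folder; results are written to ./vis as above.
for img in /path/to/your/images/*.jpg /path/to/your/images/*.png; do
    [ -e "$img" ] || continue   # skip patterns that matched nothing
    python test.py "$img"
done
```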
### Gradio live demo
An interactive live demo is also available. Start the Gradio demo with the command below, then go to [http://127.0.0.1:7860/](http://127.0.0.1:7860/).
If you are running it on a remote server, be sure to forward port 7860 (see the sketch after the command below).
Alternatively, you can try it right away on the [Spaces](https://huggingface.co/spaces/FQiao/GenStereo) demo hosted by Hugging Face.
```shell
python app.py
```
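If the demo runs on a remote machine, a minimal sketch for forwarding the port over SSH (replace `user@server` with your own login):
```shell
# Sketch: tunnel the remote Gradio port 7860 to your local machine,
# then open http://127.0.0.1:7860/ in a local browser.
ssh -L 7860:127.0.0.1:7860 user@server
```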
## Train
Please read [Train_Guide.md](./Train_Guide.md).
## Citation
``` bibtex
@inproceedings{qiao2025genstereo,
author = {Qiao, Feng and Xiong, Zhexiao and Xing, Eric and Jacobs, Nathan},
title = {Towards Open-World Generation of Stereo Images and Unsupervised Matching},
booktitle = {Proceedings of the {IEEE/CVF} International Conference on Computer Vision ({ICCV})},
year = {2025},
eprint = {2503.12720},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
```
## Acknowledgements
Our code is based on [GenWarp](https://github.com/sony/genwarp), [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone), and other repositories. We thank the authors of the relevant repositories and papers.