Update README.md

README.md (CHANGED)

·
<a href="https://scholar.google.com/citations?user=TE9stNgAAAAJ">Wenkang Qin</a>
·
<a href="https://scholar.google.com/citations?user=fwpY_HoAAAAJ">Feng Chen</a>
·
<a href="http://www.zhengzhu.net/">Zheng Zhu</a>
·
<a href="https://donydchen.github.io">Donny Y. Chen</a>
</p>

<p align="center">
<a href="">
<img src="https://lhmd.top/volsplat/assets/teaser_horizontal.jpg" alt="Logo" width="100%">
</a>
</p>

Pixel-aligned feed-forward 3DGS methods suffer from two primary limitations: 1) 2D feature matching struggles to effectively resolve the multi-view alignment problem, and 2) the Gaussian density is constrained and cannot be adaptively controlled according to scene complexity. We propose VolSplat, a method that directly regresses Gaussians from 3D features based on a voxel-aligned prediction strategy. This approach enables adaptive control of the Gaussian density according to scene complexity and resolves the multi-view alignment challenge.

## Updates

- **2026-03-11 Update:** Since the original dataset links for RE10K and ACID are frequently broken, we provide preprocessed data on HuggingFace ([RE10K](https://huggingface.co/datasets/lhmd/re10k_torch) and [ACID](https://huggingface.co/datasets/lhmd/acid_torch)).

- **2025-12-21 Update:** Released our training/evaluation code and model checkpoints. We are working on a more powerful version of VolSplat. Stay tuned!

- **2025-09-23 Update:** Released our paper on arXiv.

## Method

<p align="center">
<a href="">
<img src="https://lhmd.top/volsplat/assets/pipeline.jpg" alt="Logo" width="100%">
</a>
</p>

<strong>Overview of VolSplat</strong>. Given multi-view images as input, we first extract 2D features for each image with a Transformer-based network and construct per-view cost volumes via plane sweeping. A depth prediction module then estimates a depth map for each view, which is used to unproject the 2D features into 3D space and form a voxel feature grid. We then employ a sparse 3D decoder to refine these features in 3D space and predict the parameters of a 3D Gaussian for each occupied voxel. Finally, novel views are rendered from the predicted 3D Gaussians.
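
To make the voxel-aligned idea concrete, here is a minimal PyTorch sketch of unprojecting per-view features into a sparse voxel grid. The function name, tensor shapes, and voxel size are illustrative assumptions, not the repository's actual implementation (which builds sparse tensors with MinkowskiEngine).

```python
# Sketch: unproject per-view 2D features with predicted depth and pool them into voxels.
import torch

def unproject_to_voxels(feats, depth, K, c2w, voxel_size=0.05):
    """feats: (V, C, H, W) per-view features; depth: (V, H, W) predicted depth;
    K: (V, 3, 3) intrinsics; c2w: (V, 4, 4) camera-to-world poses (hypothetical shapes)."""
    V, C, H, W = feats.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()          # (H, W, 3) homogeneous pixels
    rays = torch.einsum("vij,hwj->vhwi", torch.inverse(K), pix)            # back-projected rays per view
    pts_cam = rays * depth.unsqueeze(-1)                                   # 3D points in camera frame
    pts_h = torch.cat([pts_cam, torch.ones_like(depth).unsqueeze(-1)], -1)
    pts_w = torch.einsum("vij,vhwj->vhwi", c2w, pts_h)[..., :3]            # 3D points in world frame

    coords = torch.floor(pts_w.reshape(-1, 3) / voxel_size).long()         # integer voxel indices
    feats_flat = feats.permute(0, 2, 3, 1).reshape(-1, C)
    uniq, inv = torch.unique(coords, dim=0, return_inverse=True)           # occupied voxels
    pooled = torch.zeros(uniq.shape[0], C).index_add_(0, inv, feats_flat)  # sum features per voxel
    counts = torch.zeros(uniq.shape[0]).index_add_(0, inv, torch.ones(inv.shape[0]))
    return uniq, pooled / counts.unsqueeze(-1)                             # mean feature per occupied voxel

# Toy usage with random inputs; each occupied voxel would then be decoded into Gaussian parameters.
coords, voxel_feats = unproject_to_voxels(
    torch.randn(2, 8, 64, 64), torch.rand(2, 64, 64) * 5.0,
    torch.eye(3).expand(2, 3, 3), torch.eye(4).expand(2, 4, 4))
print(coords.shape, voxel_feats.shape)
```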

## Installation

Our code is developed and tested with **PyTorch 2.4.0**, **CUDA 12.1**, and **Python 3.10**.

```bash
conda create -n volsplat python=3.10
conda activate volsplat

pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 xformers==0.0.27.post2 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt

# Install MinkowskiEngine
# For easier installation, we made some modifications based on https://github.com/Julie-tang00/Common-envs-issues/blob/main/Cuda12-MinkowskiEngine and included it directly in our project.
conda install -c conda-forge openblas
pip install ninja
cd MinkowskiEngine
python setup.py install
cd ..
```
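
Optionally, a quick sanity check along these lines (a minimal sketch; it assumes the steps above completed successfully) confirms that CUDA and MinkowskiEngine are usable:

```python
# Optional post-install check: verify PyTorch sees the GPU and MinkowskiEngine imports.
import torch
import MinkowskiEngine as ME

print("torch", torch.__version__, "| cuda", torch.version.cuda, "| gpu available:", torch.cuda.is_available())
print("MinkowskiEngine", ME.__version__)
```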

## Model Zoo

Our pre-trained models and baseline models are hosted on [Hugging Face](https://huggingface.co/lhmd/VolSplat). Please download the required models to the `./models` directory. To facilitate reproduction and comparison, we also provide pretrained weights for the baseline methods trained with the same input views.

| Model | Download |
| --------------------------------- | ------------------------------------------------------------ |
| volsplat-re10k-256x256 | [download](https://huggingface.co/lhmd/VolSplat/resolve/main/volsplat-re10k-256x256.ckpt) |
| pixelsplat-re10k-baseline-256x256 | [download](https://huggingface.co/lhmd/VolSplat/resolve/main/pixelsplat-re10k-baseline-256x256.ckpt) |
| mvsplat-re10k-baseline-256x256 | [download](https://huggingface.co/lhmd/VolSplat/resolve/main/mvsplat-re10k-baseline-256x256.ckpt) |
| transplat-re10k-baseline-256x256 | [download](https://huggingface.co/lhmd/VolSplat/resolve/main/transplat-re10k-baseline-256x256.ckpt) |
| depthsplat-re10k-baseline-256x256 | [download](https://huggingface.co/lhmd/VolSplat/resolve/main/depthsplat-re10k-baseline-256x256.ckpt) |
| ggn-re10k-baseline-256x256 | [download](https://huggingface.co/lhmd/VolSplat/resolve/main/ggn-re10k-baseline-256x256.ckpt) |
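
If you prefer to script the download, a sketch like the following with the `huggingface_hub` package should work (the repository id and filename come from the table above; the target directory follows the `./models` convention used below):

```python
# Fetch the VolSplat checkpoint into ./models via the Hugging Face Hub (optional sketch).
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="lhmd/VolSplat",
    filename="volsplat-re10k-256x256.ckpt",
    local_dir="models",
)
print("checkpoint saved to", ckpt_path)
```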

## Datasets

### RealEstate10K / ACID

Please refer to [ZPressor](https://github.com/ziplab/ZPressor?tab=readme-ov-file#datasets) for the dataset format and preprocessed versions of the datasets.

We also provide preprocessed data on HuggingFace ([RE10K](https://huggingface.co/datasets/lhmd/re10k_torch) and [ACID](https://huggingface.co/datasets/lhmd/acid_torch)).
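
As a sketch, the preprocessed RE10K data can be fetched with `huggingface_hub` as shown below; the local directory is an assumption and should match the `dataset.roots` value passed to the training/evaluation commands:

```python
# Download the preprocessed RE10K dataset from the Hugging Face Hub (optional sketch).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="lhmd/re10k_torch",
    repo_type="dataset",
    local_dir="datasets/re10k",
)
```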

### ScanNet

For ScanNet, we follow [FreeSplat](https://github.com/wangys16/FreeSplat) to train and evaluate at 256x256 resolution.

## Training

### Preparation

Before training, you need to download the pre-trained [UniMatch](https://github.com/autonomousvision/unimatch) and [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2) weights:

```bash
wget https://s3.eu-central-1.amazonaws.com/avg-projects/unimatch/pretrained/gmflow-scale1-things-e9887eda.pth -P pretrained
wget https://huggingface.co/depth-anything/Depth-Anything-V2-Base/resolve/main/depth_anything_v2_vitb.pth -P pretrained
```

### RealEstate10K

Run the following command to train on RealEstate10K:

```bash
python -m src.main +experiment=re10k \
    data_loader.train.batch_size=1 \
    'dataset.roots'='["datasets/re10k"]' \
    dataset.test_chunk_interval=10 \
    dataset.num_context_views=6 \
    trainer.max_steps=150000 \
    model.encoder.num_scales=2 \
    model.encoder.upsample_factor=2 \
    model.encoder.lowest_feature_resolution=4 \
    model.encoder.monodepth_vit_type=vitb \
    output_dir=outputs/re10k-256x256 \
    wandb.project=VolSplat \
    checkpointing.pretrained_monodepth=pretrained/pretrained_weights/depth_anything_v2_vitb.pth \
    checkpointing.pretrained_mvdepth=pretrained/pretrained_weights/gmflow-scale1-things-e9887eda.pth
```

### ScanNet

To train on ScanNet, we fine-tune the model pre-trained on RealEstate10K:

```bash
python -m src.main +experiment=scannet \
    data_loader.train.batch_size=1 \
    'dataset.roots'='["datasets/scannet"]' \
    dataset.image_shape=[256,256] \
    trainer.max_steps=100000 \
    trainer.val_check_interval=0.9 \
    train.eval_model_every_n_val=40 \
    checkpointing.every_n_train_steps=2000 \
    model.encoder.num_scales=2 \
    model.encoder.upsample_factor=2 \
    model.encoder.lowest_feature_resolution=4 \
    model.encoder.monodepth_vit_type=vitb \
    output_dir=outputs/scannet-256x256 \
    wandb.project=VolSplat \
    checkpointing.pretrained_model=models/volsplat-re10k-256x256.ckpt
```

## Evaluation

Ensure that pre-trained or downloaded models are located in `./models`.

### RealEstate10K

```bash
python -m src.main +experiment=re10k \
    data_loader.train.batch_size=1 \
    'dataset.roots'='["datasets/re10k"]' \
    dataset.test_chunk_interval=10 \
    dataset/view_sampler=evaluation \
    dataset.view_sampler.num_context_views=6 \
    dataset.view_sampler.index_path=assets/re10k_evaluation/evaluation_index_re10k.json \
    trainer.max_steps=150000 \
    model.encoder.num_scales=2 \
    model.encoder.upsample_factor=2 \
    model.encoder.lowest_feature_resolution=4 \
    model.encoder.monodepth_vit_type=vitb \
    mode=test \
    test.save_video=false \
    test.save_depth_concat_img=false \
    test.save_image=false \
    test.save_gt_image=false \
    test.save_input_images=false \
    test.save_gaussian=false \
    checkpointing.pretrained_model=models/volsplat-re10k-256x256.ckpt \
    output_dir=outputs/volsplat-re10k-256x256-test
```

### ACID

We evaluate on ACID zero-shot, using the model trained on RealEstate10K:

```bash
python -m src.main +experiment=acid \
    data_loader.train.batch_size=1 \
    'dataset.roots'='["datasets/acid"]' \
    dataset.test_chunk_interval=10 \
    dataset/view_sampler=evaluation \
    dataset.view_sampler.num_context_views=6 \
    dataset.view_sampler.index_path=assets/acid_evaluation/evaluation_index_acid.json \
    trainer.max_steps=150000 \
    model.encoder.num_scales=2 \
    model.encoder.upsample_factor=2 \
    model.encoder.lowest_feature_resolution=4 \
    model.encoder.monodepth_vit_type=vitb \
    mode=test \
    test.save_video=false \
    test.save_depth_concat_img=false \
    test.save_image=false \
    test.save_gt_image=false \
    test.save_input_images=false \
    test.save_gaussian=false \
    checkpointing.pretrained_model=models/volsplat-re10k-256x256.ckpt \
    output_dir=outputs/volsplat-acid-256x256-test
```

## Citation

If you find our work useful for your research, please consider citing us:

```bibtex
@article{wang2025volsplat,
  title={VolSplat: Rethinking Feed-Forward 3D Gaussian Splatting with Voxel-Aligned Prediction},
  author={Wang, Weijie and Chen, Yeqing and Zhang, Zeyu and Liu, Hengyu and Wang, Haoxiao and Feng, Zhiyuan and Qin, Wenkang and Chen, Feng and Zhu, Zheng and Chen, Donny Y. and Zhuang, Bohan},
  journal={arXiv preprint arXiv:2509.19297},
  year={2025}
}
```

## Contact

If you have any questions, please create an issue on this repository or contact us at wangweijie@zju.edu.cn.

## Acknowledgements

This project is built on [DepthSplat](https://github.com/cvg/depthsplat) and [MinkowskiEngine](https://github.com/NVIDIA/MinkowskiEngine). We thank the original authors for their excellent work.