Upload README.md with huggingface_hub
README.md CHANGED

@@ -22,7 +22,7 @@ conda env create -f environment.yml
 conda activate gld

 # Download all checkpoints
-
+python -c "from huggingface_hub import snapshot_download; snapshot_download('SeonghuJeon/GLD', local_dir='.')"

 # Run demo
 ./run_demo.sh da3

@@ -41,16 +41,5 @@ huggingface-cli download SeonghuJeon/GLD --local-dir .
 | `pretrained_models/mae_decoder.pt` | DA3 MAE decoder (EMA, decoder-only) | 423M | 1.6G |
 | `pretrained_models/vggt/mae_decoder.pt` | VGGT MAE decoder (EMA, decoder-only) | 425M | 1.6G |

-Stage-2 and MAE decoder checkpoints contain **EMA weights only**.
+Stage-2 and MAE decoder checkpoints contain **EMA weights only**.
 MAE decoder checkpoints contain **decoder weights only** (encoder removed).
-
-## Citation
-
-```bibtex
-@article{jang2026gld,
-  title={Repurposing Geometric Foundation Models for Multi-view Diffusion},
-  author={Jang, Wooseok and Jeon, Seonghu and Han, Jisang and Choi, Jinhyeok and Kwon, Minkyung and Kim, Seungryong and Xie, Saining and Liu, Sainan},
-  journal={arXiv preprint},
-  year={2026}
-}
-```
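The commit's new download step fetches the whole repo snapshot. For the 1.6G checkpoints in the table, a single-file fetch can be cheaper. A minimal sketch: `fetch_checkpoint` and the stdlib URL fallback below are illustrative helpers, not part of the repo; `hf_hub_download` and the Hub's `/resolve/{revision}/{path}` URL scheme are standard `huggingface_hub` behavior.

```python
def resolve_url(repo_id: str, path: str, revision: str = "main") -> str:
    """Direct-download URL for one file in a Hugging Face model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{path}"

def fetch_checkpoint(repo_id: str, path: str, local_dir: str = ".") -> str:
    """Download a single checkpoint file; fall back to urllib if
    huggingface_hub is not installed. Returns the local file path."""
    try:
        from huggingface_hub import hf_hub_download
        return hf_hub_download(repo_id=repo_id, filename=path, local_dir=local_dir)
    except ImportError:
        import os
        import urllib.request
        dest = os.path.join(local_dir, path)
        os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
        urllib.request.urlretrieve(resolve_url(repo_id, path), dest)
        return dest

# Usage (downloads ~1.6G, so not run here):
# fetch_checkpoint("SeonghuJeon/GLD", "pretrained_models/mae_decoder.pt")
```

This pulls only the DA3 MAE decoder instead of every checkpoint; the README's `snapshot_download` one-liner remains the simplest way to mirror the full repo.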