RunyuZhu committed 6b873ca (verified) · 1 parent: bafbfe9

Upload README.md

Files changed (1): README.md (+269 −3)

---
license: apache-2.0
---

# NAKA-GS

This pipeline is built on [VGGT](https://github.com/facebookresearch/vggt) and [gsplat](https://github.com/nerfstudio-project/gsplat); we thank the authors for their excellent work.

The paper is available at https://arxiv.org/abs/2604.11142 (PDF: https://arxiv.org/pdf/2604.11142).

NAKA-GS is an end-to-end pipeline for low-light 3D scene reconstruction and novel-view synthesis:

1. `Naka` enhances low-light training images.
2. `VGGT` reconstructs sparse cameras and geometry from the enhanced images.
3. `gsplat` performs Gaussian Splatting training, with optional `PPM` dense-point preprocessing.

Qualitative results (visual comparisons on RealX3D) can be found in the `asset` folder.

## 1. What The Pipeline Expects

Each scene directory should look like this before the first run:

```text
data/
└── Scene1/
    ├── train/                  # low-light training images
    ├── transforms_train.json   # training camera poses
    ├── transforms_test.json    # render trajectory / test poses
    └── test/                   # optional GT test images for metrics
```

After the pipeline runs, it will automatically create:

```text
data/
└── Scene1/
    ├── images/            # Naka-enhanced images
    ├── sparse/            # VGGT reconstruction outputs
    │   ├── cameras.bin
    │   ├── images.bin
    │   ├── points3D.bin
    │   └── points.ply
    └── gsplat_results/    # rendering results, stats, checkpoints
```

Notes:

- `images/`, `sparse/`, and `gsplat_results/` do not need to exist before the first run.
- `sparse/points.ply` is produced by the VGGT stage and then reused by the PPM stage.
- If a scene does not contain ground-truth test images, the pipeline still renders novel views but skips reference-image metrics.
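The layout above can be sanity-checked before the first run. A minimal sketch (the `check_scene` helper is our own illustration, not part of the pipeline):

```python
from pathlib import Path

def check_scene(scene_dir):
    """Verify the required scene layout described above.

    check_scene is a hypothetical helper, not part of the pipeline.
    Returns a list of missing required entries (empty means OK).
    """
    scene = Path(scene_dir)
    missing = []
    if not (scene / "train").is_dir():
        missing.append("train/")
    for name in ("transforms_train.json", "transforms_test.json"):
        if not (scene / name).is_file():
            missing.append(name)
    # test/ is optional: without it, reference-image metrics are skipped.
    if not (scene / "test").is_dir():
        print("note: no test/ directory, reference metrics will be skipped")
    return missing
```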

## 2. System Requirements

- Linux
- NVIDIA GPU
- CUDA-compatible PyTorch environment
- A working CUDA toolkit / `nvcc` visible to the environment for `gsplat` extension compilation

All experiments and internal validation for this repository were tested on an NVIDIA RTX A6000 GPU.

## 3. Install The Environment

We recommend Conda for reproducibility.

If the unified environment in this README does not solve cleanly on your machine, use the original environment setup procedures from the two upstream components instead:

- `vggt/README.md`
- `gsplat/README.md`

In that fallback workflow, configure the `VGGT` and `gsplat` environments separately first, then return to this repository and run the unified pipeline script.

### Option A: Conda

From the repository root:

```bash
conda env create -f environment.yaml
conda activate naka-gs
pip install git+https://github.com/rahul-goel/fused-ssim@328dc9836f513d00c4b5bc38fe30478b4435cbb5
pip install git+https://github.com/harry7557558/fused-bilagrid@90f9788e57d3545e3a033c1038bb9986549632fe
pip install git+https://github.com/nerfstudio-project/nerfview@4538024fe0d15fd1a0e4d760f3695fc44ca72787
pip install "ppisp @ git+https://github.com/nv-tlabs/ppisp@v1.0.0"
```

If your Conda solver is slow, you can use:

```bash
conda env create -f environment.yaml --solver=libmamba
```

### Option B: Pip

If you already have a matching CUDA PyTorch installation:

```bash
pip install -r requirements.txt
```

## 4. Download The VGGT Checkpoint

The repository does not include the `VGGT` model weights. Download the official checkpoint and place it at:

```text
vggt/checkpoint/model.pt
```

Official model page:

- https://huggingface.co/facebook/VGGT-1B

Direct checkpoint URL:

- https://huggingface.co/facebook/VGGT-1B/resolve/main/model.pt

Example:

```bash
mkdir -p vggt/checkpoint
wget -O vggt/checkpoint/model.pt \
  https://huggingface.co/facebook/VGGT-1B/resolve/main/model.pt
```
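A truncated download is a common failure mode with large checkpoints. One way to catch it early is a quick size check before running the pipeline; this is a heuristic sketch with an arbitrary `min_bytes` threshold, not an official file size:

```python
from pathlib import Path

def checkpoint_looks_complete(path, min_bytes=1_000_000):
    """Heuristic download check: a VGGT-1B checkpoint is large, so a
    tiny file almost certainly means a failed or truncated download.

    min_bytes is an arbitrary threshold chosen for illustration.
    """
    p = Path(path)
    return p.is_file() and p.stat().st_size >= min_bytes
```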

## 5. Naka Checkpoint

By default, the pipeline looks for the Naka checkpoint at:

```text
outputs/naka/checkpoints/latest.pth
```

A checkpoint at a different location can be passed via `--naka_ckpt`.

## 6. Prepare The Scene

Put your scene under `data/` or any other location you prefer. The important part is that `--scene_dir` points to the scene root.

Example:

```text
/path/to/naka-gs/data/Scene/
├── train/
├── transforms_train.json
├── transforms_test.json
└── test/    # optional
```

- `train/` is required.
- `transforms_train.json` is required when using `--pose-source replace`.
- `transforms_test.json` is required when using `--render-traj-path testjson`.

## 7. Reproduce The Unified Pipeline Command

From the repository root, run:

```bash
python run_lowlight_reconstruction.py \
    --scene_dir /path/to/naka-gs/data/Your_Scene \
    --pose-source replace \
    --render-traj-path testjson \
    --disable-viewer \
    --ppm-enable \
    --ppm-dense-points-path sparse/points.ply \
    --ppm-align-mode none \
    --ppm-voxel-size 0.01 \
    --ppm-tau0 0.005 \
    --ppm-beta 0.01 \
    --ppm-iters 6
```

This command runs the full pipeline:

1. Low-light `train/` images are enhanced into `images/`.
2. `VGGT` reconstructs the scene and writes `sparse/` plus `sparse/points.ply`.
3. `gsplat` uses `PPM` to preprocess `sparse/points.ply`, then trains and renders the target trajectory from `transforms_test.json`.
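For intuition about what a dense-point preprocessing knob like `--ppm-voxel-size` controls, here is a generic voxel-grid downsampling sketch. This is our own illustration of a standard point-cloud thinning step, not the repository's actual PPM algorithm (which also involves `tau0`, `beta`, and iterative pruning):

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.01):
    """Keep one representative point per occupied voxel.

    Illustrative only: a standard way to thin a dense (N, 3) point
    cloud, shown to give intuition for a voxel-size parameter. It is
    not the pipeline's actual PPM implementation.
    """
    # Quantize each point to its voxel index, then keep the first
    # point that lands in each voxel.
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```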

## 8. Example With A Local Conda Python Path

If you want to use a specific Python interpreter inside a Conda environment, the command is equivalent to:

```bash
/path/to/conda/env/bin/python /path/to/naka-gs/run_lowlight_reconstruction.py \
    --scene_dir /path/to/naka-gs/data/Your_Scene \
    --pose-source replace \
    --render-traj-path testjson \
    --disable-viewer \
    --ppm-enable \
    --ppm-dense-points-path sparse/points.ply \
    --ppm-align-mode none \
    --ppm-voxel-size 0.01 \
    --ppm-tau0 0.005 \
    --ppm-beta 0.01 \
    --ppm-iters 6
```

## 9. Main Outputs

After a successful run, check:

- `data/Your_Scene/images/` for enhanced images
- `data/Your_Scene/sparse/` for the VGGT sparse reconstruction
- `data/Your_Scene/gsplat_results/` for rendered views, metrics, checkpoints, and logs
- `data/Your_Scene/gsplat_results/pipeline_summary.json` for a stage-by-stage summary
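A quick way to inspect a finished run is to load `pipeline_summary.json`. The sketch below assumes only that the file is a JSON object keyed by stage name; the real schema is not documented here and may differ:

```python
import json
from pathlib import Path

def load_summary(path):
    """Load pipeline_summary.json as (stage, record) pairs.

    Assumes a JSON object keyed by stage name; treat this as an
    inspection sketch, not a documented schema.
    """
    summary = json.loads(Path(path).read_text())
    return list(summary.items())
```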

## 10. Useful Variants

### Reuse Existing Enhanced Images

```bash
python run_lowlight_reconstruction.py \
    --scene_dir /path/to/scene \
    --skip_naka
```

### Reuse Existing Sparse Reconstruction

```bash
python run_lowlight_reconstruction.py \
    --scene_dir /path/to/scene \
    --skip_naka \
    --skip_vggt
```

### Disable PPM

```bash
python run_lowlight_reconstruction.py \
    --scene_dir /path/to/scene \
    --ppm-enable false
```

## 11. Common Issues

### `FileNotFoundError: Naka checkpoint is required`

Provide `--naka_ckpt /path/to/latest.pth`, or place the checkpoint at the default path shown above.

### `No enhanced images found`

Make sure `train/` contains valid image files and that the Naka stage finished successfully.

### `PPM dense point cloud is missing: .../sparse/points.ply`

This usually means the VGGT stage did not finish successfully, so `sparse/points.ply` was not generated.

### `torch.cuda.is_available()` is `False`

The `gsplat` stage requires a visible CUDA GPU.

### `gsplat` spends a long time on the first run

This is expected when the CUDA extension is compiled for the first time.

## 12. Minimal Checklist Before Running

- Environment created successfully
- `vggt/checkpoint/model.pt` downloaded
- Naka checkpoint available, either at the default path or via `--naka_ckpt`
- Scene directory contains `train/`
- `transforms_train.json` exists for `--pose-source replace`
- `transforms_test.json` exists for `--render-traj-path testjson`

## 13. Citation

If you find this code useful for your research, please cite:

```bibtex
@misc{zhu2026nakagsbionicsinspireddualbranchnaka,
      title={Naka-GS: A Bionics-inspired Dual-Branch Naka Correction and Progressive Point Pruning for Low-Light 3DGS},
      author={Runyu Zhu and SiXun Dong and Zhiqiang Zhang and Qingxia Ye and Zhihua Xu},
      year={2026},
      eprint={2604.11142},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2604.11142},
}
```