---
pipeline_tag: image-to-3d
---

# NAS3R: From None to All: Self-Supervised 3D Reconstruction via Novel View Synthesis

NAS3R is a self-supervised feed-forward framework that jointly learns explicit 3D geometry and camera parameters with no ground-truth annotations and no pretrained priors.

*(Teaser figure)*

## Installation

1. Clone NAS3R:

   ```shell
   git clone --recurse-submodules https://github.com/ranrhuang/NAS3R.git
   cd NAS3R
   ```

2. Create the environment (example using conda):

   ```shell
   conda create -n nas3r python=3.11 -y
   conda activate nas3r
   pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu121
   pip install -r requirements.txt
   pip install -e submodules/diff-gaussian-rasterization
   ```
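After installing, a quick sanity check can confirm that the pinned packages are importable in the active environment. This is a minimal sketch using only the standard library; the module names checked simply mirror the install commands above.

```python
import importlib.util

def check_env(modules=("torch", "torchvision")):
    """Return, per module, whether it is importable in the current environment."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

missing = [m for m, ok in check_env().items() if not ok]
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("Environment looks ready.")
```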

## Pre-trained Checkpoints

| Model name | Training resolutions | Training data | Training settings |
| --- | --- | --- | --- |
| `re10k_nas3r.ckpt` | 256x256 | re10k | RE10K, 2 views |
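The evaluation command expects the checkpoint under `./checkpoints/`. A minimal, standard-library-only download sketch is below; the URL argument is a placeholder that must be replaced with the actual hosting location of `re10k_nas3r.ckpt`, and the helper name is just illustrative.

```python
from pathlib import Path
from urllib.request import urlretrieve

def fetch_checkpoint(url, ckpt_dir="checkpoints", name="re10k_nas3r.ckpt"):
    """Download a checkpoint into the directory the evaluation command reads from."""
    dest = Path(ckpt_dir) / name
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.exists():  # skip the download if the file is already present
        urlretrieve(url, dest)
    return dest
```

For example, `fetch_checkpoint("https://<host>/re10k_nas3r.ckpt")` (with the real URL) places the weights at `./checkpoints/re10k_nas3r.ckpt`.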

## Usage: Evaluation

To perform Novel View Synthesis and Pose Estimation with NAS3R (VGGT-based architecture) on the RealEstate10K dataset:

```shell
# Assuming the weight is downloaded to ./checkpoints/re10k_nas3r.ckpt
python -m src.main +experiment=nas3r/random/re10k mode=test wandb.name=re10k \
    dataset/view_sampler@dataset.re10k.view_sampler=evaluation \
    dataset.re10k.view_sampler.index_path=assets/evaluation_index_re10k.json \
    checkpointing.load=./checkpoints/re10k_nas3r.ckpt \
    test.save_image=false
```

## Citation

```bibtex
@article{huang2026nas3r,
  title={From None to All: Self-Supervised 3D Reconstruction via Novel View Synthesis},
  author={Ranran Huang and Weixun Luo and Ye Mao and Krystian Mikolajczyk},
  journal={arXiv preprint arXiv:2603.27455},
  year={2026}
}
```