Image-to-3D
checkpoint
sai-dev committed on
Commit 1b32a6c · 1 Parent(s): 3244fd9

docs: add paper metadata, quickstart, and citation

Files changed (1)
  1. README.md +38 -3
README.md CHANGED
@@ -20,6 +20,8 @@ Please note: For individuals or organizations generating annual revenue of US $1
 
 * **Developed by**: [Stability AI](https://stability.ai/)
 * **Model type**: Transformer multi-view image-to-3D model
 * **Model details**: ReLi3D is trained to reconstruct a relightable 3D mesh from multiple 512x512 object images with known camera poses. The model outputs UV-unwrapped geometry and texture, and predicts material properties such as roughness and metallic values, together with an estimated illumination representation for downstream rendering workflows.
 
 ### License
@@ -30,7 +32,7 @@ Please note: For individuals or organizations generating annual revenue of US $1
 
 * **Repository**: https://github.com/Stability-AI/ReLi3D
 * **Project page**: https://reli3d.jdihlmann.com/
- * **arXiv page**: Coming soon
 
 ### Files
 
@@ -41,9 +43,31 @@ Please note: For individuals or organizations generating annual revenue of US $1
 
 The training process uses renders from Objaverse and curated subsets of additional sources with license review and filtering for training suitability.
 
- ## Usage
 
- For usage instructions, please refer to the [ReLi3D GitHub repository](https://github.com/Stability-AI/ReLi3D).
 
 
 ### Intended Uses
 
@@ -59,6 +83,17 @@ All uses of the model should be in accordance with our [Acceptable Use Policy](h
 
 The model was not trained to produce factual or true representations of people or events. As such, generating such content is out of scope for this model.
 
 ## Safety
 
 As part of our safety-by-design and responsible AI deployment approach, we implement safety measures throughout the development of our models, from the time we begin pre-training a model to the ongoing development, fine-tuning, and deployment of each model. We have implemented a number of safety mitigations that are intended to reduce the risk of severe harms. However, we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases.
 
 
 * **Developed by**: [Stability AI](https://stability.ai/)
 * **Model type**: Transformer multi-view image-to-3D model
+ * **Paper**: [ReLi3D: Relightable Multi-view 3D Reconstruction with Disentangled Illumination](https://arxiv.org/abs/2603.19753)
+ * **Authors**: Jan-Niklas Dihlmann, Mark Boss, Simon Donne, Andreas Engelhardt, Hendrik P. A. Lensch, Varun Jampani
  * **Model details**: ReLi3D is trained to reconstruct a relightable 3D mesh from multiple 512x512 object images with known camera poses. The model outputs UV-unwrapped geometry and texture, and predicts material properties such as roughness and metallic values, together with an estimated illumination representation for downstream rendering workflows.
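The model consumes multiple posed 512x512 views. The demo script name `infer_from_transforms.py` suggests a NeRF-style `transforms.json` holding per-view camera poses; the sketch below builds such a file as a hypothetical illustration only — the field names and layout are assumptions, not the repository's documented schema.

```python
import math

# Hypothetical NeRF-style transforms.json builder. The schema
# (camera_angle_x, frames, transform_matrix) is an assumption;
# consult the ReLi3D repository for the actual input format.
def make_transforms(n_views=4, radius=2.0, fov_deg=50.0):
    frames = []
    for i in range(n_views):
        a = 2 * math.pi * i / n_views
        # Camera placed on a circle around the object. Rotation is left
        # as identity for brevity; a real pose would aim at the origin.
        c2w = [
            [1.0, 0.0, 0.0, radius * math.cos(a)],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, radius * math.sin(a)],
            [0.0, 0.0, 0.0, 1.0],
        ]
        frames.append({"file_path": f"images/{i:03d}.png",
                       "transform_matrix": c2w})
    return {"camera_angle_x": math.radians(fov_deg),
            "w": 512, "h": 512, "frames": frames}
```

Serializing the returned dict with `json.dump` yields one pose file per object, matching the one-directory-per-object layout the demo command implies.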
26
 
27
  ### License
 
 
 * **Repository**: https://github.com/Stability-AI/ReLi3D
 * **Project page**: https://reli3d.jdihlmann.com/
+ * **arXiv paper**: https://arxiv.org/abs/2603.19753
 
 ### Files
 
 
 
 The training process uses renders from Objaverse and curated subsets of additional sources with license review and filtering for training suitability.
 
+ ## Quickstart
+ 
+ ```bash
+ git clone https://github.com/Stability-AI/ReLi3D.git
+ cd ReLi3D
+ 
+ python3.10 -m venv .venv
+ source .venv/bin/activate
+ pip install --upgrade pip
+ pip install -r requirements.txt
+ pip install ./native/uv_unwrapper ./native/texture_baker
+ 
+ huggingface-cli login
+ python scripts/download_model_from_hf.py \
+     --repo-id StabilityLabs/ReLi3D \
+     --output-dir artifacts/model
+ 
+ python demos/reli3d/infer_from_transforms.py \
+     --input-root demo_files/objects \
+     --objects Camera_01 \
+     --output-root outputs \
+     --overwrite
+ ```
+ 
+ For full usage instructions, please refer to the [ReLi3D GitHub repository](https://github.com/Stability-AI/ReLi3D).
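The quickstart writes results under `--output-root`. As a minimal sketch for collecting what was generated, the helper below scans that directory for common mesh/texture/illumination file types; the extensions are assumptions, since the actual output filenames are not documented here.

```python
from pathlib import Path

# Hypothetical helper: list assets under the quickstart's --output-root.
# The extension list is a guess at typical mesh/texture/HDR outputs,
# not the repository's documented layout.
def list_artifacts(output_root, exts=(".obj", ".glb", ".png", ".hdr")):
    root = Path(output_root)
    # rglob walks subdirectories, since outputs are grouped per object.
    return sorted(p for p in root.rglob("*")
                  if p.suffix.lower() in exts)
```

Running it after inference (e.g. `list_artifacts("outputs")`) gives a quick sanity check that the mesh and texture files were produced.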
 
 ### Intended Uses
 
 
 
 The model was not trained to produce factual or true representations of people or events. As such, generating such content is out of scope for this model.
 
+ ## Citation
+ 
+ ```bibtex
+ @inproceedings{dihlmann2026reli3d,
+   author    = {Dihlmann, Jan-Niklas and Boss, Mark and Donne, Simon and Engelhardt, Andreas and Lensch, Hendrik P. A. and Jampani, Varun},
+   title     = {ReLi3D: Relightable Multi-view 3D Reconstruction with Disentangled Illumination},
+   booktitle = {International Conference on Learning Representations (ICLR)},
+   year      = {2026}
+ }
+ ```
+ 
 ## Safety
 
 As part of our safety-by-design and responsible AI deployment approach, we implement safety measures throughout the development of our models, from the time we begin pre-training a model to the ongoing development, fine-tuning, and deployment of each model. We have implemented a number of safety mitigations that are intended to reduce the risk of severe harms. However, we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases.