nielsr (HF Staff) committed · Commit db7f14b · verified · 1 parent: 43c8887

Improve model card: add paper link, abstract, and library name


This PR enhances the model card by:
- Adding a direct link to the paper: https://huggingface.co/papers/2507.01305
- Including the full paper abstract.
- Adding `library_name: diffusers` to the metadata, which enables the "how to use" button on the model page.
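For context, a usage snippet enabled by the `diffusers` metadata might look like the sketch below. This is an illustration only: the LoRA repo id and the ControlNet checkpoint are assumptions (not taken from this card), and the authors' actual pipeline lives in the linked `inpaint.py` on GitHub. Nothing heavy runs at import time; the model download only happens if `build_pipeline()` is called.

```python
# Hedged sketch: loading the TurboLoRA weights into an SDXL ControlNet
# inpainting pipeline with diffusers. Repo ids marked "assumed" are
# placeholders for illustration, not confirmed by this model card.

def build_pipeline():
    """Construct the SDXL ControlNet inpainting pipeline and attach the LoRA.

    Calling this downloads several GB of weights; it is deliberately not
    invoked at module level.
    """
    import torch
    from diffusers import ControlNetModel, StableDiffusionXLControlNetInpaintPipeline

    controlnet = ControlNetModel.from_pretrained(
        "diffusers/controlnet-depth-sdxl-1.0",  # assumed ControlNet checkpoint
        torch_dtype=torch.float16,
    )
    pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # base model from the card metadata
        controlnet=controlnet,
        torch_dtype=torch.float16,
    )
    # Attach this repository's LoRA weights (repo id assumed for illustration).
    pipe.load_lora_weights("DiffusionLight/DiffusionLight-Turbo-LoRA")
    return pipe

# Prompt text taken from the card's instance_prompt metadata.
PROMPT = (
    "a perfect mirrored reflective chrome ball sphere, "
    "a perfect black dark mirrored reflective chrome ball sphere"
)
```

In practice, the authors' `inpaint.py` in the linked repository is the authoritative reference for how the mask, depth conditioning, and LoRA swapping are wired together.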

Files changed (1): README.md (+10 -7)
README.md:

````diff
--- a/README.md
+++ b/README.md
@@ -1,8 +1,9 @@
 ---
-license: mit
 base_model:
 - stabilityai/stable-diffusion-xl-base-1.0
+license: mit
 pipeline_tag: image-to-image
+library_name: diffusers
 tags:
 - image-to-image
 - StableDiffusionXLControlNetInpaintPipeline
@@ -13,17 +14,19 @@ tags:
 - inpainting
 - light-estimation
 - relighting
-instance_prompt: >-
-  a perfect mirrored reflective chrome ball sphere, a perfect black dark
-  mirrored reflective chrome ball sphere
+instance_prompt: a perfect mirrored reflective chrome ball sphere, a perfect black
+  dark mirrored reflective chrome ball sphere
 inference: false
 ---
 
 # DiffusionLight's TurboLoRA
-[Project Page](https://diffusionlight.github.io/turbo) | [Code](https://github.com/DiffusionLight/DiffusionLight-Turbo-diffusers)
+[📖 Paper](https://huggingface.co/papers/2507.01305) | [Project Page](https://diffusionlight.github.io/turbo) | [Code](https://github.com/DiffusionLight/DiffusionLight-Turbo-diffusers)
 
 [![Open DiffusionLight in Colab](https://colab.research.google.com/drive/1UcSp9mj77ZXAyTCvkcXVvC3DYznEyLwZ?usp=sharing&sandboxMode=true#scrollTo=k2pTDk79bMQI&forceEdit=true&sandboxMode=true)
 
+## Abstract
+We introduce a simple yet effective technique for estimating lighting from a single low-dynamic-range (LDR) image by reframing the task as a chrome ball inpainting problem. This approach leverages a pre-trained diffusion model, Stable Diffusion XL, to overcome the generalization failures of existing methods that rely on limited HDR panorama datasets. While conceptually simple, the task remains challenging because diffusion models often insert incorrect or inconsistent content and cannot readily generate chrome balls in HDR format. Our analysis reveals that the inpainting process is highly sensitive to the initial noise in the diffusion process, occasionally resulting in unrealistic outputs. To address this, we first introduce DiffusionLight, which uses iterative inpainting to compute a median chrome ball from multiple outputs to serve as a stable, low-frequency lighting prior that guides the generation of a high-quality final result. To generate high-dynamic-range (HDR) light probes, an Exposure LoRA is fine-tuned to create LDR images at multiple exposure values, which are then merged. While effective, DiffusionLight is time-intensive, requiring approximately 30 minutes per estimation. To reduce this overhead, we introduce DiffusionLight-Turbo, which reduces the runtime to about 30 seconds with minimal quality loss. This 60x speedup is achieved by training a Turbo LoRA to directly predict the averaged chrome balls from the iterative process. Inference is further streamlined into a single denoising pass using a LoRA swapping technique. Experimental results that show our method produces convincing light estimates across diverse settings and demonstrates superior generalization to in-the-wild scenarios. Our code is available at this https URL
+
 <table>
 <thead>
 <tr>
@@ -91,7 +94,7 @@ inference: false
 
 This repository is the weight for an example code that shows how to implement DiffusionLight-Turbo `inpaint.py` in Diffusers `0.33.0`
 
-We recommend checking out Github Repository: [Github](https://github.com/DiffusionLight/DiffusionLight-Turbo-diffusers)
+We recommend checking out Github Repository: [Github](https://github.com/DiffusionLight/DiffusionLight-Turbo-diffusers)
 
 Note that the Weight is the same as 'DiffusionLight/Flicker2k', but due to diffusers 0.33.0 breaking the compatibility, we have to store it in a separate model.
 
@@ -120,4 +123,4 @@ Weights for this model are available in Safetensors format.
 ```
 
 ## Visit us 🦉
-[![Vision & Learning Laboratory](https://i.imgur.com/hQhkKhG.png)](https://vistec.ist/vision) [![VISTEC - Vidyasirimedhi Institute of Science and Technology](https://i.imgur.com/4wh8HQd.png)](https://vistec.ist/)
+[![Vision & Learning Laboratory](https://i.imgur.com/hQhkKhG.png)](https://vistec.ist/vision) [![VISTEC - Vidyasirimedhi Institute of Science and Technology](https://i.imgur.com/4wh8HQd.png)](https://vistec.ist/)
````
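The abstract added in this diff describes merging LDR outputs rendered at multiple exposure values into an HDR light probe. The following is a minimal per-pixel sketch of such a merge under simple, assumed conventions (gamma-2.2 encoding, an EV set like {0, -2.5, -5}, and a "brightest non-clipped exposure wins" rule); the authors' actual merge procedure is in their paper and code, not reproduced here.

```python
# Hedged sketch of per-pixel multi-exposure LDR -> HDR merging.
# Assumptions (not taken from this card): LDR values are gamma-2.2 encoded
# in [0, 1], and an image at exposure value `ev` records radiance scaled
# by 2**ev before encoding.

GAMMA = 2.2
SATURATION = 0.99  # encoded values above this are treated as clipped


def linearize(ldr_value: float) -> float:
    """Undo gamma encoding to recover a linear intensity."""
    return ldr_value ** GAMMA


def merge_exposures(samples: dict) -> float:
    """Merge {ev: ldr_value} samples of one pixel into a linear HDR value.

    Prefer the brightest exposure (highest EV) that is not clipped, since it
    has the best quantization for that pixel; fall back to the darkest
    exposure when every sample clips (e.g. the sun). Dividing by 2**ev puts
    all samples back on one radiometric scale.
    """
    if not samples:
        raise ValueError("no samples given")
    for ev in sorted(samples, reverse=True):  # e.g. 0, -2.5, -5
        if samples[ev] < SATURATION:
            return linearize(samples[ev]) / (2.0 ** ev)
    darkest = min(samples)
    return linearize(samples[darkest]) / (2.0 ** darkest)
```

For a midtone pixel this simply returns the linearized EV 0 value, while a pixel that clips at EV 0 is recovered from a darker exposure and scaled up (by 32x for EV -5), which is what lets the merged probe exceed the [0, 1] LDR range.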