---
license: other
license_name: nvidia-license-non-commercial
license_link: LICENSE
datasets:
- handsomeWilliam/Relation252K
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
---
# LoRWeB Model
<div align="center">
<a href="https://arxiv.org/abs/2602.15727">ArXiv</a> | <a href="https://github.com/NVlabs/LoRWeB" style="display:inline;text-decoration:underline;"><img width="20" height="20" style="display:inline;margin:0;" src="https://img.icons8.com/ios-glyphs/30/github.png" alt="github"> GitHub Repository</a> | <a href="https://research.nvidia.com/labs/par/lorweb"> 🌐 Project Website</a> | <a href="https://huggingface.co/datasets/hilamanor/LoRWeB_evalset">🤗 Evaluation Dataset</a>
</div>
<div align="center">
**Hila Manor**<sup>1,2</sup>,  **Rinon Gal**<sup>2</sup>,  **Haggai Maron**<sup>1,2</sup>,  **Tomer Michaeli**<sup>1</sup>,  **Gal Chechik**<sup>2,3</sup>
<sup>1</sup>Technion - Israel Institute of Technology    <sup>2</sup>NVIDIA    <sup>3</sup>Bar-Ilan University
</div>
<div align="center">
<img src="https://github.com/NVlabs/LoRWeB/raw/main/assets/teaser.jpg" alt="Teaser" width="800"/>
<i>Given a prompt and an image triplet {**a**, **a'**, **b**} that visually describe a desired transformation, LoRWeB dynamically constructs a single LoRA from a learnable basis of LoRA modules, and produces an editing result **b'** that applies the same analogy to the new image.</i>
</div>
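The composition described in the caption above can be sketched in a few lines. This is a toy illustration of the weight-basis idea only, not the actual implementation: the shapes, basis size, and the source of the coefficients (which LoRWeB predicts from the image triplet) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_basis = 64, 4, 8  # layer width, LoRA rank, basis size (all assumed)

# A learnable basis of n_basis LoRA factor pairs (A_i: r x d, B_i: d x r).
A = rng.standard_normal((n_basis, r, d)) * 0.01
B = rng.standard_normal((n_basis, d, r)) * 0.01

# Coefficients that would be predicted from the triplet {a, a', b};
# random here, purely for illustration.
coeffs = rng.standard_normal(n_basis)

# Dynamically compose a single LoRA update as a linear combination
# of the basis LoRAs.
delta_W = sum(c * (B_i @ A_i) for c, B_i, A_i in zip(coeffs, B, A))

W = rng.standard_normal((d, d))  # frozen base weight
W_edited = W + delta_W           # effective weight applied during the edit
print(delta_W.shape)  # (64, 64)
```

Note that the combined update stays low-rank (at most `n_basis * r`), so composing from the basis is as cheap as applying one ordinary LoRA.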
### ℹ️ Additional Information
**This model is a reproduction of the original model from the paper. It was trained from scratch using Technion resources.** This might introduce differences from the results reported in the paper. Please see the `samples` directory for examples of this model's outputs on the {**a**, **a'**, **b**} triplets from the teaser figure.
Please see the full model card and further details in the [GitHub Repo](https://github.com/NVlabs/LoRWeB).
## 📚 Citation
If you use this model in your research, please cite:
```bibtex
@article{manor2026lorweb,
  title={Spanning the Visual Analogy Space with a Weight Basis of LoRAs},
  author={Manor, Hila and Gal, Rinon and Maron, Haggai and Michaeli, Tomer and Chechik, Gal},
  journal={arXiv preprint arXiv:2602.15727},
  year={2026}
}
```