---
license: other
license_name: nvidia-license-non-commercial
license_link: LICENSE
datasets:
- handsomeWilliam/Relation252K
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
---
# LoRWeB Model (Coming Soon)
<div align="center">
<a href="https://arxiv.org/">ArXiv</a> | <a href="https://github.com/NVlabs/LoRWeB" style="display:inline;text-decoration:underline;"><img width="20" height="20" style="display:inline;margin:0;" src="https://img.icons8.com/ios-glyphs/30/github.png" alt="github"> GitHub Repository</a> | <a href="https://research.nvidia.com/labs/par/lorweb"> 🌐 Project Website</a> | <a href="https://huggingface.co/datasets/hilamanor/LoRWeB_evalset">🤗 Evaluation Dataset (Coming Soon)</a>
</div>
<div align="center">
**Hila Manor**<sup>1,2</sup>,&ensp; **Rinon Gal**<sup>2</sup>,&ensp; **Haggai Maron**<sup>1,2</sup>,&ensp; **Tomer Michaeli**<sup>1</sup>,&ensp; **Gal Chechik**<sup>2,3</sup>
<sup>1</sup>Technion - Israel Institute of Technology &ensp;&ensp; <sup>2</sup>NVIDIA &ensp;&ensp; <sup>3</sup>Bar-Ilan University
</div>
<div align="center">
<img src="https://github.com/NVlabs/LoRWeB/raw/main/assets/teaser.jpg" alt="Teaser" width="800"/>
<i>Given a prompt and an image triplet {**a**, **a'**, **b**} that visually describes a desired transformation, LoRWeB dynamically constructs a single LoRA from a learnable basis of LoRA modules, and produces an editing result **b'** that applies the same analogy to the new image.</i>
</div>
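The core idea above — building one effective LoRA as a combination of basis LoRA modules — can be illustrated with a minimal sketch. All dimensions, the random basis, and the mixing coefficients below are hypothetical placeholders (in LoRWeB the coefficients would be derived from the image triplet); this is not the released model's API.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_basis = 8, 2, 4  # hypothetical: feature dim, LoRA rank, basis size

# Hypothetical basis of LoRA modules: each contributes a low-rank update B_i @ A_i.
basis = [
    (rng.standard_normal((d, r)), rng.standard_normal((r, d)))
    for _ in range(n_basis)
]

# Placeholder mixing coefficients (in LoRWeB these would depend on the triplet {a, a', b}).
coeffs = np.array([0.5, -0.2, 0.1, 0.7])

# Combine the basis into a single LoRA update: delta_W = sum_i c_i * (B_i @ A_i).
delta_W = sum(c * (B @ A) for c, (B, A) in zip(coeffs, basis))

# The combined update is applied to a frozen base weight W as W + delta_W.
assert delta_W.shape == (d, d)
```

The combined update stays low-rank (at most `n_basis * r`), so spanning the space of edits with a small basis remains cheap relative to storing a separate full LoRA per transformation.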
### โ„น๏ธ Additional Information
Please see the full model card and further details in the [GitHub Repo](https://github.com/NVlabs/LoRWeB).
## 📚 Citation
If you use this model in your research, please cite:
```bibtex
@article{manor2026lorweb,
title={Spanning the Visual Analogy Space with a Weight Basis of LoRAs},
author={Manor, Hila and Gal, Rinon and Maron, Haggai and Michaeli, Tomer and Chechik, Gal},
journal={arXiv preprint},
year={2026}
}
```