witcherderivia committed · Commit e4359e9 · verified · 1 Parent(s): 75ea6f8

Update README.md
  - StyleTransfer
  - QwenImageEdit
---

# QwenStyle: Content-Preserving Style Transfer with Qwen-Image-Edit

For the first time, we introduce content-preserving style transfer to Qwen-Image-Edit. The model transfers style cues from a style reference to a content reference while preserving the characteristics of the content reference, and does so efficiently, requiring only 4 sampling steps.

Please note that our style transfer model is based on [Qwen-Image-Edit-2509](https://huggingface.co/Qwen/Qwen-Image-Edit-2509) and must be used with the [Qwen-Image-Lightning LoRA](https://huggingface.co/lightx2v/Qwen-Image-Lightning), which we have converted to DiffSynth format for compatibility. Without it, the model may suffer from low speed or low quality.

Our GitHub page is [QwenStyle](https://github.com/witcherofresearch/Qwen-Image-Style-Transfer).
 
## Quick Start

```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```

Please download our style transfer LoRA and the Lightning LoRA from [this link](https://huggingface.co/witcherderivia/Qwen-Image-Style-Transfer/).

Then run `infer_style_transfer.py` for inference. We have tested the model on one H100, where generating a result takes 5 seconds.
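The LoRA download above can be scripted with the Hugging Face CLI. A minimal sketch, assuming `huggingface-cli` is available (it ships with `huggingface_hub`) and `./loras` as a hypothetical target directory; the exact file names inside the repo may differ:

```shell
# Fetch the style-transfer LoRA and the converted Lightning LoRA.
# The repo id comes from the link above; --local-dir chooses where files land.
huggingface-cli download witcherderivia/Qwen-Image-Style-Transfer --local-dir ./loras
```

`--local-dir` is optional; without it, the files go to the Hugging Face cache directory.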
## Training

Our training framework is based on [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio). Special thanks to the authors of DiffSynth.

## Data

We will open-source all our training data once the repository exceeds 200 stars.

## Citation

We have released the tech report of [QwenStyle V1](https://openreview.net/forum?id=Cgb7JpOA5Q). We keep refining QwenStyle and will release new versions in the future. Please star our project and cite our work if you find it helpful.

```bibtex
@article{zhang2026qwenstyle,
  journal={TeleAI},
  year={2026}
}
```