Where to place?
Is it enough to place the Lora qwen-360-diffusion-int8-bf16-v1.safetensors file in the Lora comfyui folder, or do I need to create a folder called qwen-360-diffusion with all the files inside: create_360_sweep_frames.py run_qwem_image_int8.py... and place it in the Lora folder?
@MarioLuna If you are using ComfyUI, then you simply place the model in ComfyUI's lora folder, and then use it with your desired workflow. The provided scripts in this repo are separate examples for researchers to use if they want.
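To make the placement concrete, here is a minimal shell sketch. The base path and the `models/loras` layout are assumptions for a default ComfyUI checkout; adjust `COMFY` to wherever your install lives.

```shell
# Minimal sketch -- COMFY path is an assumption for a default install.
COMFY="${COMFY:-$HOME/ComfyUI}"
LORA_DIR="$COMFY/models/loras"
mkdir -p "$LORA_DIR"
# Copy only the single LoRA file here; the repo's .py scripts
# (create_360_sweep_frames.py etc.) are research examples and are NOT needed:
# cp qwen-360-diffusion-int8-bf16-v1.safetensors "$LORA_DIR/"
echo "$LORA_DIR"
```

After a restart (or a refresh in the UI), the LoRA should appear in ComfyUI's LoRA loader node dropdown.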
Please, could you share the link to download qwen_2.5_vl_7b.safetensors? I have tried with qwen_2.5_vl_7b_fp8_scaled.safetensors and it produces a black image.
@MarioLuna If you are using ComfyUI, I recommend that you use GGUF to maximize quality with 'qwen-360-diffusion-int8-bf16-v1.safetensors'. Otherwise, if you use ComfyUI's fp8 version of the transformer, then you'll need to use the 'qwen-360-diffusion-int4-bf16-v1.safetensors' and 'qwen-360-diffusion-int4-bf16-v1-b.safetensors' LoRA models to avoid poor-quality outputs.
Install the GGUF extension: https://github.com/city96/ComfyUI-GGUF
Use the GGUF Q8 model: https://huggingface.co/city96/Qwen-Image-gguf/blob/main/qwen-image-Q8_0.gguf
Then use either 'qwen_2.5_vl_7b.safetensors' (best quality) or 'qwen_2.5_vl_7b_fp8_scaled.safetensors' from here: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/text_encoders
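The steps above can be sketched as a directory layout. Note this is only a sketch under assumptions: the folder names follow common ComfyUI conventions (GGUF diffusion models load from `models/unet` via the ComfyUI-GGUF loader node, text encoders from `models/text_encoders`), so verify them against your install and the extension's README.

```shell
# Sketch of where each downloaded file lands -- directory names are
# assumptions based on common ComfyUI conventions; adjust as needed.
COMFY="${COMFY:-$HOME/ComfyUI}"
mkdir -p "$COMFY/models/unet" "$COMFY/models/text_encoders" "$COMFY/models/loras"
# qwen-image-Q8_0.gguf                        -> models/unet/          (GGUF Unet Loader)
# qwen_2.5_vl_7b.safetensors                  -> models/text_encoders/
# qwen-360-diffusion-int8-bf16-v1.safetensors -> models/loras/
ls "$COMFY/models"
```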
Hi,
Thanks a lot for your great help.
I tried using the qwen_2.5_vl_7b.safetensors model, and it's still producing a black image. I don't understand why.
I have the same models as your workflow; everything is the same.
@MarioLuna If all the models are correctly in place, then it sounds like there may be a problem with your ComfyUI install itself.