Instructions to use HighCWu/FLUX.1-dev-4bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use HighCWu/FLUX.1-dev-4bit with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "HighCWu/FLUX.1-dev-4bit", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
Here is some code for using LORAs with this
#7
by APIS-AI - opened
Using LoRAs with this model sounds very interesting. I'm going to try it tomorrow. I'm a beginner in this area, so I'd be happy to see any help. Thanks!
Just use adapters exactly as you would with SDXL.

Load up all your weights and give each one a name. Note that `weight_name` takes the file name, not the full path, and `adapter_name` must be a string:

```python
lora_dir = "whatever_directory"
model.load_lora_weights(
    lora_dir,
    weight_name="your_lora.safetensors",
    local_files_only=True,
    use_safetensors=True,
    adapter_name="whatever_you_want_to_name_it",
)
```
Then set them as active and assign their weights:

```python
list_of_lora_names = ["lora1", "another_lora", "etc"]
list_of_lora_weights = [1.0, 0.8, 0.7]
model.set_adapters(list_of_lora_names, adapter_weights=list_of_lora_weights)
```
You can list which LoRAs are currently active:

```python
current_loras = model.get_active_adapters()
```

and you can delete specific ones:

```python
to_remove = ["another_lora", "etc"]
model.delete_adapters(to_remove)
```
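Putting the steps above together, here is a minimal end-to-end sketch. All paths, file names, and adapter names are hypothetical placeholders, and the pipeline work is kept inside `run()` so you only call it on a machine with the model and a CUDA device available:

```python
# End-to-end sketch: 4-bit FLUX pipeline plus two LoRA adapters.
# Directory, file, and adapter names below are placeholders.

LORAS = {
    "style_lora": {"dir": "loras/", "file": "style.safetensors", "weight": 1.0},
    "detail_lora": {"dir": "loras/", "file": "detail.safetensors", "weight": 0.8},
}


def build_adapter_lists(loras):
    """Return the parallel name/weight lists that set_adapters() expects."""
    names = list(loras)
    weights = [loras[n]["weight"] for n in names]
    return names, weights


def run():
    import torch
    from diffusers import DiffusionPipeline

    # Load the quantized pipeline (swap device_map to "mps" on Apple devices).
    pipe = DiffusionPipeline.from_pretrained(
        "HighCWu/FLUX.1-dev-4bit", dtype=torch.bfloat16, device_map="cuda"
    )

    # Attach each LoRA under its adapter name.
    for name, cfg in LORAS.items():
        pipe.load_lora_weights(cfg["dir"], weight_name=cfg["file"], adapter_name=name)

    # Activate all adapters with their weights, then generate.
    names, weights = build_adapter_lists(LORAS)
    pipe.set_adapters(names, adapter_weights=weights)
    image = pipe("Astronaut in a jungle, cold color palette").images[0]
    image.save("out.png")
```

Keeping the adapter configuration in one dict makes it easy to add, drop, or reweight LoRAs without touching the pipeline code.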