Q-Series Sketch Collection
```python
import torch
from diffusers import DiffusionPipeline

# Load the FLUX.1-dev base model; switch "cuda" to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("strangerzonehf/Qx-Art")

prompt = "Qx-Art, A black and white drawing of a fox on a white paper. The fox has a long tail, a long snout, and a small black eye. There is a black string tied around the fox's neck, and the fox has two small black circles on the front of its head. There are two long legs on the left side of the fox, and one on the right side."
image = pipe(prompt).images[0]
image.save("qx-art-fox.png")
```

Training Parameters
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 18 & 2630 |
| Epoch | 17 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total images used for training: 19
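The hyperparameters in the table above correspond to common kohya-ss sd-scripts flags. As a hedged sketch only: the script name, dataset/output paths, and learning rate below are assumptions not stated in this card, while the flag values come from the table.

```shell
# Hypothetical kohya-ss sd-scripts invocation reconstructed from the table above.
# Script name, paths, and learning rate are placeholders, not from this card.
accelerate launch flux_train_network.py \
  --pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
  --train_data_dir="./dataset" \
  --output_dir="./output" \
  --network_dim=64 \
  --network_alpha=32 \
  --optimizer_type="AdamW" \
  --lr_scheduler="constant" \
  --noise_offset=0.03 \
  --multires_noise_discount=0.1 \
  --multires_noise_iterations=10 \
  --max_train_epochs=17 \
  --save_every_n_epochs=1
```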
| Dimensions | Aspect Ratio | Recommendation |
|---|---|---|
| 1280 x 832 | 3:2 | Best |
| 1024 x 1024 | 1:1 | Default |
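FLUX pipelines expect both sides of the output resolution to be divisible by 16, and both table entries satisfy this. A minimal sketch (the helper name is my own, not part of diffusers) that validates a resolution before passing it as `width=` / `height=` to the pipeline call:

```python
import math

def check_resolution(width: int, height: int, multiple: int = 16) -> str:
    """Validate a generation resolution and return its reduced aspect ratio.

    Assumes the FLUX pipeline requirement that both sides be divisible
    by 16; this helper is illustrative, not part of diffusers.
    """
    if width % multiple or height % multiple:
        raise ValueError(f"{width}x{height}: sides must be multiples of {multiple}")
    g = math.gcd(width, height)
    return f"{width // g}:{height // g}"

# The two recommended resolutions from the table above.
print(check_resolution(1280, 832))   # exact reduced ratio, roughly 3:2
print(check_resolution(1024, 1024))  # → 1:1
```

Note that 1280 x 832 reduces exactly to 20:13, which the table rounds to 3:2.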
```python
import torch
from diffusers import DiffusionPipeline

base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Load the Qx-Art LoRA weights on top of the base model.
lora_repo = "strangerzonehf/Qx-Art"
trigger_word = "Qx-Art"
pipe.load_lora_weights(lora_repo)

device = torch.device("cuda")
pipe.to(device)
```
You should use Qx-Art to trigger the image generation.
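Since every prompt needs the trigger word, a tiny helper (the function name is my own) that prepends it only when it is not already present:

```python
TRIGGER_WORD = "Qx-Art"

def with_trigger(prompt: str, trigger: str = TRIGGER_WORD) -> str:
    """Prepend the LoRA trigger word unless the prompt already starts with it."""
    if prompt.startswith(trigger):
        return prompt
    return f"{trigger}, {prompt}"

print(with_trigger("A black and white drawing of a fox"))
# → Qx-Art, A black and white drawing of a fox
```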
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
Base model: black-forest-labs/FLUX.1-dev