---
license: other
base_model: black-forest-labs/FLUX.2-dev
tags:
  - flux2
  - flux2-diffusers
  - text-to-image
  - image-to-image
  - diffusers
  - simpletuner
  - not-for-all-audiences
  - lora
  - template:sd-lora
  - standard
pipeline_tag: text-to-image
inference: true
---

# quzo/fl2

This is a PEFT LoRA derived from [black-forest-labs/FLUX.2-dev](https://huggingface.co/black-forest-labs/FLUX.2-dev).

The main validation prompt used during training was:

```
bm82 man
```

## Validation settings

- CFG: 7.5
- CFG Rescale: 0.0
- Steps: 20
- Sampler: FlowMatchEulerDiscreteScheduler
- Seed: None
- Resolution: 1024x1024

Note: The validation settings are not necessarily the same as the training settings.

The text encoder was not trained. You may reuse the base model text encoder for inference.
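
To reproduce the validation sampler at inference time, the scheduler can be set explicitly on the loaded pipeline. A minimal sketch, assuming `pipeline` has been created as in the Inference section below:

```python
from diffusers import FlowMatchEulerDiscreteScheduler

# Match the validation sampler. FLUX.2 pipelines default to a flow-match
# scheduler, so this is often a no-op, but it makes the choice explicit.
pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipeline.scheduler.config
)
```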

## Training settings

- Training epochs: 266
- Training steps: 1600
- Learning rate: 0.0001
  - Learning rate schedule: constant_with_warmup
  - Warmup steps: 0
- Max grad value: 2.0
- Effective batch size: 2
  - Micro-batch size: 2
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow_matching
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Base model precision: no_change
- Caption dropout probability: 0.1%
- LoRA Rank: 16
- LoRA Alpha: 16.0
- LoRA Dropout: 0.1
- LoRA initialisation style: default
- LoRA mode: Standard
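
For reference, the LoRA hyperparameters above correspond to a PEFT configuration along these lines. This is a sketch only; the target modules are chosen by SimpleTuner and are not recorded in this card:

```python
from peft import LoraConfig

# Approximate PEFT view of the adapter hyperparameters listed above.
lora_config = LoraConfig(
    r=16,                    # LoRA rank
    lora_alpha=16.0,         # LoRA alpha
    lora_dropout=0.1,        # LoRA dropout
    init_lora_weights=True,  # "default" initialisation style
)
```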

## Datasets

### training-images

- Repeats: 0
- Total number of images: 12
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
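
These numbers are consistent with the training settings above: with 12 images and an effective batch size of 2, one epoch is 6 steps, so 1600 steps lands at roughly 266 epochs. A quick check:

```python
images = 12
effective_batch_size = 2  # micro-batch 2 x grad accumulation 1 x 1 GPU
steps_per_epoch = images // effective_batch_size  # 6
epochs = 1600 // steps_per_epoch                  # 266
print(steps_per_epoch, epochs)
```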

## Inference

```python
import torch
from diffusers import DiffusionPipeline

model_id = "black-forest-labs/FLUX.2-dev"
adapter_id = "quzo/fl2"

# Load the base model directly in bf16 and attach the LoRA adapter.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)

prompt = "bm82 man"
negative_prompt = "blurry, cropped, ugly"

## Optional: quantise the model to save on VRAM.
## Note: the model was not quantised during training, so quantisation is
## not required at inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

# Pick the best available device; the pipeline is already in its target precision.
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
pipeline.to(device)

model_output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=7.5,
).images[0]

model_output.save("output.png", format="PNG")
```
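
If this adapter is the only one in use, the LoRA weights can optionally be merged into the base weights to remove the per-step adapter overhead, using the standard diffusers API:

```python
# Optionally fuse the LoRA into the base weights for slightly faster
# inference; call pipeline.unfuse_lora() to revert.
pipeline.fuse_lora()
```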