Model Overview

Silan10/flux-quanto-int8 is an 8-bit quantized version of the black-forest-labs/FLUX.1-dev text-to-image model. Only the transformer component has been quantized to 8-bit integer (int8) precision using optimum-quanto; the remaining pipeline components (text encoders, VAE) are unchanged.

Optimum-quanto represents the transformer's weights as 8-bit integers and executes them with optimized kernels. This roughly halves the transformer's memory footprint relative to bfloat16 while maintaining image quality close to the original model.
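For intuition, int8 weight quantization maps each float weight to an 8-bit integer plus a scale factor, and dequantizes by multiplying back. The sketch below shows symmetric per-tensor quantization; it is illustrative only and is not optimum-quanto's actual implementation (quanto handles scales, kernels, and calibration internally).

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: choose the scale so that
    # the largest-magnitude weight maps to +/-127.
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from int8 values and the scale.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)  # toy weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # round-trip error is bounded by ~scale/2 per weight
```

Each weight now costs 1 byte instead of 2 (bfloat16), which is where the memory savings come from.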

Usage

This model requires a custom loader class, QuantizedFluxTransformer2DModel. The snippet below downloads quantized_flux.py from this repo and imports it dynamically:

import torch
from diffusers import FluxPipeline
from huggingface_hub import hf_hub_download
import importlib.util

REPO_ID = "Silan10/flux-quanto-int8"
FLUX_MODEL_PATH = "black-forest-labs/FLUX.1-dev"  # Or local path

# Download and import QuantizedFluxTransformer2DModel
quantized_flux_path = hf_hub_download(repo_id=REPO_ID, filename="quantized_flux.py")
spec = importlib.util.spec_from_file_location("quantized_flux", quantized_flux_path)
quantized_flux = importlib.util.module_from_spec(spec)
spec.loader.exec_module(quantized_flux)
QuantizedFluxTransformer2DModel = quantized_flux.QuantizedFluxTransformer2DModel

# Load quantized transformer
print("Loading quantized transformer...")
transformer = QuantizedFluxTransformer2DModel.from_pretrained(REPO_ID)
transformer.to(device="cuda")

# Load rest of pipeline
print("Loading pipeline...")
pipe = FluxPipeline.from_pretrained(
    FLUX_MODEL_PATH,
    transformer=None,  # skip loading the original bf16 transformer
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Swap in the quantized transformer and run the VAE in full precision
pipe.transformer = transformer
pipe.vae.to(torch.float32)
print("✓ Pipeline ready.")

prompt = "Ultra-detailed nighttime cyberpunk city street, several pedestrians in modern clothes, one person in the foreground looking toward the camera, sharp facial features and detailed hair, wet pavement reflecting colorful neon signs, shop windows with small readable text on signs, a gradient sky fading from deep blue to purple, a mix of strong highlights and deep shadows, highly detailed, 4K, cinematic lighting."
print("Generating image...")

image = pipe(
    prompt,
    num_inference_steps=20,
    guidance_scale=3.5,
    max_sequence_length=512,
    width=1024,
    height=1024,
    generator=torch.Generator("cpu").manual_seed(42)
).images[0]

image.save("output_quanto_int8.png")
print("✓ Image generated successfully.")
print("DONE!")
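The load-from-path import used above is the standard importlib recipe for executing a downloaded Python file as a module. A self-contained illustration, using a throwaway stub file in place of the downloaded quantized_flux.py:

```python
import importlib.util
import os
import tempfile

# Write a tiny module to disk as a stand-in for quantized_flux.py.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("class QuantizedFluxTransformer2DModel:\n    name = 'stub'\n")
    path = f.name

# Same load-from-path recipe as in the usage snippet.
spec = importlib.util.spec_from_file_location("quantized_flux_stub", path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

print(module.QuantizedFluxTransformer2DModel.name)  # prints "stub"
os.remove(path)
```

Loading the class this way avoids having to install the repo as a package; the real quantized_flux.py fetched via hf_hub_download is imported identically.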
