Use from the Diffusers library
pip install -U diffusers transformers accelerate bitsandbytes
import torch
from diffusers import DiffusionPipeline

# The NF4 (bitsandbytes) checkpoints require a CUDA GPU.
pipe = DiffusionPipeline.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("flux-nf4.png")
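
To verify the memory savings on your own hardware, you can read back PyTorch's peak allocator statistics right after generation (CUDA only):

# Peak VRAM used since the start of the process, in GB.
print(f"Peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")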

Running Flux.1-Dev under 12 GB of VRAM

This repository contains the NF4-quantized parameters for the T5 text encoder and the transformer of Flux.1-Dev. Check out this Colab Notebook for details on how they were obtained.

Also check out this notebook, which shows how to use these checkpoints and run inference on a free-tier Colab GPU.
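
If you prefer to assemble the pipeline yourself, the sketch below loads the two NF4 components from this repository and plugs them into the base pipeline, with CPU offload enabled to keep peak VRAM low. It assumes the components live under the standard Flux subfolder names (transformer and text_encoder_2) and that you have access to the gated black-forest-labs/FLUX.1-dev base weights.

import torch
from transformers import T5EncoderModel
from diffusers import DiffusionPipeline, FluxTransformer2DModel

repo_id = "hf-internal-testing/flux.1-dev-nf4-pkg"

# Load the two NF4-quantized components from this repository.
transformer = FluxTransformer2DModel.from_pretrained(
    repo_id, subfolder="transformer", torch_dtype=torch.float16
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    repo_id, subfolder="text_encoder_2", torch_dtype=torch.float16
)

# Plug them into the base pipeline; the remaining components load in FP16.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.float16,
)

# Offload idle components to CPU so peak VRAM stays within a small budget.
pipe.enable_model_cpu_offload()

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]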

Relevant diffusers PR: https://github.com/huggingface/diffusers/pull/9213/.

The checkpoints in this repository were optimized to run in a T4 Colab notebook. More specifically, the compute dtype of the quantized checkpoints was kept at FP16. In practice, if your GPU supports BF16, you should change the compute dtype to BF16 (via bnb_4bit_compute_dtype).
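
For reference, here is a minimal sketch of how an NF4 checkpoint with a BF16 compute dtype could be produced using diffusers' BitsAndBytesConfig; it assumes access to the gated black-forest-labs/FLUX.1-dev base weights and mirrors the quantization settings described above (the output path is illustrative):

import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# NF4 quantization with a BF16 compute dtype (instead of the FP16
# used for the T4-targeted checkpoints in this repository).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

# Persist the requantized weights for later reuse (hypothetical path).
transformer.save_pretrained("flux.1-dev-nf4-bf16-transformer")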
