Instructions for using hf-internal-testing/flux.1-dev-nf4-pkg with libraries (Diffusers), inference providers, notebooks, and local apps.
How to use hf-internal-testing/flux.1-dev-nf4-pkg with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
# Running Flux.1-dev under 12GBs
This repository contains the NF4 params for the T5 and transformer of Flux.1-Dev. Check out this Colab Notebook for details on how they were obtained.
Check out this notebook, which shows how to load the checkpoints and run inference in a free-tier Colab Notebook.
Respective diffusers PR: https://github.com/huggingface/diffusers/pull/9213/.
The checkpoints in this repository were optimized to run on a T4 notebook. More specifically, the compute datatype of the quantized checkpoints was kept at FP16. In practice, if your GPU supports BF16, you should change the compute datatype to BF16 (`bnb_4bit_compute_dtype`).