## Setup

Install the latest version of `diffusers`:

```shell
pip install git+https://github.com/huggingface/diffusers.git
```

Log in to your Hugging Face account:

```shell
hf auth login
```

## How to use

The following code snippet demonstrates how to use the [Flux2](https://huggingface.co/black-forest-labs/FLUX.2-dev) modular pipeline with a remote text encoder and a 4-bit quantized version of the DiT. It requires approximately 19 GB of VRAM to generate an image.

```python
import torch

from diffusers.modular_pipelines import SequentialPipelineBlocks
from diffusers.modular_pipelines.flux2 import ALL_BLOCKS

# Assemble the pipeline from the "remote" block preset, which offloads
# text encoding to a remote endpoint.
blocks = SequentialPipelineBlocks.from_blocks_dict(ALL_BLOCKS["remote"])
pipe = blocks.init_pipeline("diffusers/flux2-bnb-4bit-modular")
pipe.load_components(torch_dtype=torch.bfloat16, device_map="cuda")

prompt = "a photo of a cat"
outputs = pipe(prompt=prompt, num_inference_steps=28, output="images")
outputs[0].save("flux2-bnb-modular.png")
```
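Standard `diffusers` pipelines accept a `generator` argument to make generation reproducible. Assuming the modular pipeline follows the same convention (an assumption — check the pipeline documentation to confirm), a minimal sketch:

```python
import torch

# Fix the random seed so repeated runs produce the same image.
generator = torch.Generator(device="cpu").manual_seed(42)

# Assumption: the modular pipeline forwards `generator` like standard
# diffusers pipelines do. `pipe` and `prompt` are defined as above.
# outputs = pipe(prompt=prompt, num_inference_steps=28,
#                generator=generator, output="images")
```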