---
language:
- en
license: other
library_name: transformers
---
**Flux assets for ComfyUI**
This repository contains **reference weights** arranged for ComfyUI under:
```
models/
diffusion_models/
vae/
text_encoders/
```
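To replicate this layout locally (for example, to test an `extra_model_paths.yaml` before the actual weights are in place), a minimal sketch:

```bash
# Create an empty skeleton matching the layout above.
mkdir -p models/diffusion_models models/vae models/text_encoders

# Confirm the directory tree matches what ComfyUI will be pointed at.
find models -type d | sort
```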
When used with Hugging Face **Inference Endpoints**, these files are baked into the endpoint image and available at:
```
/repository/models/...
```
Configure ComfyUI (`extra_model_paths.yaml`) to include the paths above.
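As a sketch, an `extra_model_paths.yaml` entry for an Inference Endpoints deployment might look like the following. The `flux_assets` key name is arbitrary, and this assumes your ComfyUI version recognizes the `diffusion_models` and `text_encoders` folder keys:

```yaml
flux_assets:
  base_path: /repository/models/
  diffusion_models: diffusion_models/
  vae: vae/
  text_encoders: text_encoders/
```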
> **Important:** Check the **original model licenses and terms** (Black Forest Labs / Comfy-Org / comfyanonymous repos). Some weights **do not permit commercial or production use**. This repository is provided strictly for **demo/testing** purposes; you are responsible for ensuring compliance before any other use.
**Contents:**
* `models/diffusion_models/` — Flux checkpoint (e.g., `flux1-krea-dev_fp8_scaled.safetensors`)
* `models/vae/` — VAE (`ae.safetensors`)
* `models/text_encoders/` — CLIP/T5 encoders
If you need a different layout, adjust `extra_model_paths.yaml` accordingly.
**Hugging Face Inference Endpoints**
- Set the health route to `/`.
- Set `COMFY_FLAGS` explicitly for your workload (see the recommended flags below).
**Docker (explicit flags):**
```bash
docker run --rm --gpus all -p 8080:80 \
-e COMFY_FLAGS="--normalvram --use-pytorch-cross-attention --cache-lru 64 --reserve-vram 1.5" \
your-image:tag
```
**Recommended `COMFY_FLAGS`**
These presets prioritize stability. They’re meant as safe defaults; you should still **set `COMFY_FLAGS` explicitly** for your deployment when you know your workload.
`--disable-xformers --use-pytorch-cross-attention --cache-lru 2 --reserve-vram 2.0`
**Minimal recommended hardware**
- NVIDIA L4: 8 vCPU, 32 GiB RAM, 40 GiB disk