author: nucleardiffusion
baseModel: SDXL 1.0
hashes:
AutoV1: 584F63AB
AutoV2: 235745AF8D
AutoV3: 55F20A1016E7
BLAKE3: D46EB6988A26BCAF9E9CFA5E5C6264C4EE1A70F2018F33B8BC2DD7CA0681B490
CRC32: 5BB75FB4
SHA256: 235745AF8D86BF4A4C1B5B4F529868B37019A10F7C0B2E79AD0ABCA3A22BC6E1
metadata:
format: SafeTensor
modelPage: https://civitai.com/models/140686?modelVersionId=155933
preview:
- >-
https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/4596b7dc-4fd4-494c-bf8c-36d6a0ee89bc/width=450/2369135.jpeg
website: Civitai
Trigger Words
No trigger words
FIX FP16 Errors SDXL - Lower Memory use! --- sdxl-vae-fp16-fix by madebyollin
"As good as SDXL VAE but runs twice as fast and uses significantly less memory." https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/discussions/7
"Same license as stable-diffusion-xl-base-1.0;
the VAE carries the same license as sdxl-vae-fp16-fix.
Troubleshooting:
Do not use the refiner with a checkpoint that has a VAE built in
Try launch params: --medvram --opt-split-attention --xformers
SDXL-VAE-FP16-Fix is the [SDXL VAE](https://huggingface.co/stabilityai/sdxl-vae), but modified to run in fp16 precision without generating NaNs.
Details:
SDXL-VAE generates NaNs in fp16 because its internal activation values are too large for the fp16 range.
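A minimal NumPy sketch (not from the model card) of why oversized activations break fp16: values beyond fp16's maximum (65504) overflow to inf, and subsequent arithmetic on inf, such as mean-centering in a normalization layer, yields NaN:

```python
import numpy as np

# fp16 can only represent magnitudes up to 65504; larger values overflow to inf
big = np.float16(70000.0)
assert np.isinf(big)

# Once an inf appears, common ops like mean-centering produce NaN
x = np.array([70000.0, 1.0], dtype=np.float16)  # first entry overflows to inf
centered = x - x.mean()                         # inf - inf -> NaN
assert np.isnan(centered[0])
```

This is the failure mode the fix avoids by keeping activations small enough to stay in range.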
SDXL-VAE-FP16-Fix was created by finetuning the SDXL-VAE to:
- keep the final output the same, but
- make the internal activation values smaller, by
- scaling down weights and biases within the network
There are slight discrepancies between the output of SDXL-VAE-FP16-Fix and SDXL-VAE, but the decoded images should be close enough for most purposes." - bdsqlsz
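The weight-scaling idea above can be illustrated on a toy two-layer ReLU network (an illustration of the principle only; the real fix was produced by finetuning, and all names here are hypothetical): because ReLU is positively homogeneous, scaling the first layer's weights and biases down by s and the second layer's weights up by 1/s leaves the output unchanged while shrinking the intermediate activations by s:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(4, 8)), rng.normal(size=4)
x = rng.normal(size=4)

def forward(W1, b1, W2, b2, x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden activations
    return W2 @ h + b2, h

s = 0.125  # shrink internal activations 8x
y_orig, h_orig = forward(W1, b1, W2, b2, x)
y_scaled, h_scaled = forward(W1 * s, b1 * s, W2 / s, b2, x)

# Output unchanged (up to float rounding); hidden activations are s times smaller
assert np.allclose(y_orig, y_scaled)
assert np.allclose(h_scaled, h_orig * s)
```

The slight discrepancies mentioned above arise because the real VAE was finetuned rather than rescaled exactly, and fp16 rounding itself introduces small errors.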
NOT MY WORK - REUPLOADED HERE FOR EASE OF USE