InsecureErasure committed
Commit 8a4d564 · verified · 1 parent: 50fae52

Update README.md

Files changed (1):
  README.md (+7 -5)
README.md CHANGED
@@ -1,10 +1,12 @@
 ---
 license: openrail++
-base_model: stabilityai/stable-diffusion-xl-base-1.0
+base_model:
+- stabilityai/stable-diffusion-xl-base-1.0
+- cyberdelia/CyberRealisticXL
 tags:
-- stable-diffusion-xl
-- text-to-image
-- gguf
+- stable-diffusion-xl
+- text-to-image
+- gguf
 ---

 # CyberRealistic XL v9.0 — GGUF
@@ -53,7 +55,7 @@ All three images were generated using the same prompt, seed, and sampler setting
 |:---:|:---:|:---:|
 | <small>**Full checkpoint — F16**<br>Standard SDXL checkpoint with the VAE baked in at F16. Reference output.</small> | <small>**GGUF F16 + VAE extracted from checkpoint**<br>UNet in GGUF F16, VAE and CLIP extracted from the same checkpoint in F16. Pixel-perfect identical to the reference.</small> | <small>**GGUF F16 + [madebyollin VAE fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) — F16**<br>Same as above but with madebyollin's VAE (originally FP32, converted to F16).</small> |

-The following table shows the difference between some of the K-quants available with `llama-quantize`.
+The following table shows the difference between some of the K-quants available with `llama-quantize` from [llama.cpp](https://github.com/ggerganov/llama.cpp) using [city96's diffusion model patch](https://github.com/city96/ComfyUI-GGUF/blob/main/tools/README.md).

 | ![Q8_0](images/Q8_0.png) | ![Q6_K](images/Q6_K.png) | ![Q4_K_M](images/Q4_K_M.png) |
 |:---:|:---:|:---:|
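As a reference for the workflow the updated README describes, a minimal sketch of producing the K-quants with city96's ComfyUI-GGUF tools and a patched `llama-quantize` build. All file names and paths here are illustrative placeholders, not the actual files from this repository:

```shell
# Sketch only: assumes ComfyUI-GGUF's tools/convert.py and a llama.cpp
# build patched per city96's instructions are available locally.

# 1. Convert the SDXL UNet from a safetensors checkpoint to GGUF (F16).
#    (hypothetical input file name)
python convert.py --src CyberRealisticXL_v9.safetensors

# 2. Quantize the resulting F16 GGUF to a K-quant.
#    llama-quantize takes: <input.gguf> <output.gguf> <quant-type>
./llama-quantize CyberRealisticXL_v9-F16.gguf CyberRealisticXL_v9-Q4_K_M.gguf Q4_K_M
```

The same second step with `Q8_0` or `Q6_K` in place of `Q4_K_M` yields the other quants compared in the table above.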