Eviation committed · verified
Commit f70680d · 1 parent: 7ae1207

Update README.md

Files changed (1): README.md (+6 −4)
README.md CHANGED
@@ -16,9 +16,10 @@ base_model:
 | Filename | Quant Type | File Size | Description | Example Image |
 | -------- | ---------- | --------- | ----------------------------- | ------------- |
 | [Flex.2-preview-fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Eviation/Flex.2-preview/blob/main/Flex.2-preview-fp8_e4m3fn_scaled.safetensors) | FP8 E4M3FN | 8.17GB | - | - |
+| [Flex.2-preview-fp8_e5m2_scaled.safetensors](https://huggingface.co/Eviation/Flex.2-preview/blob/main/Flex.2-preview-fp8_e5m2_scaled.safetensors) | FP8 E5M2 | 8.17GB | - | - |
 
 
-# Pure
+# Pure GGUF
 
 - pure, conversion from safetensors BF16 via F32 gguf
 - architecture: flex.2 (as not all tensor shapes match to flux)
@@ -40,15 +41,16 @@ base_model:
 | [Flex.2-preview-Q3_K_S.gguf](https://huggingface.co/Eviation/Flex.2-preview/blob/main/pure/Flex.2-preview-Q3_K_S.gguf) | Q3_K_S | 3.52GB | TBC | - |
 
 
-# Fluxified
+# Fluxified GGUF
 
 - conversion from safetensors BF16 via F32 gguf
 - truncated img_in.weight tensor to first 16 latent channels
 - lost ability to do inpainting and process control image
-- drop-in replacement for FLUX
+- should be a drop-in replacement for FLUX
 - architecture: flux
 - dynamic quantization?
 
 | Filename | Quant type | File Size | Description / L2 Loss Step 25 | Example Image |
 | -------- | ---------- | --------- | ----------------------------- | ------------- |
-| [Flex.2-preview-fluxified-Q8_0.gguf](https://huggingface.co/Eviation/Flex.2-preview/blob/main/fluxified/Flex.2-preview-fluxified-Q8_0.gguf) | Q8_0 | 8.39GB | TBC | - |
+| [Flex.2-preview-fluxified-Q8_0.gguf](https://huggingface.co/Eviation/Flex.2-preview/blob/main/fluxified/Flex.2-preview-fluxified-Q8_0.gguf) | Q8_0 | 8.39GB | TBC | - |
+| [Flex.2-preview-fluxified-Q3_K_S.gguf](https://huggingface.co/Eviation/Flex.2-preview/blob/main/fluxified/Flex.2-preview-fluxified-Q3_K_S.gguf) | Q3_K_S | 3.52GB | TBC | - |
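The "truncated img_in.weight tensor to first 16 latent channels" step in the Fluxified notes above can be sketched as a column slice of the input projection. This is a minimal illustration, not the actual conversion script: it assumes the input columns are packed channel-major as 2×2 patches (so the first 16 × 4 = 64 columns belong to the 16 latent channels), and the extra-channel count (33) is a made-up placeholder; the real Flex.2 checkpoint layout should be verified before relying on this.

```python
import numpy as np


def truncate_img_in(weight: np.ndarray, keep_channels: int = 16, patch: int = 2) -> np.ndarray:
    """Drop input columns beyond the first `keep_channels` latent channels.

    ASSUMPTION: columns are packed channel-major as (patch x patch) blocks,
    so channel c owns columns [c * patch**2, (c + 1) * patch**2).
    """
    keep_cols = keep_channels * patch * patch  # 16 * 2 * 2 = 64, FLUX's img_in width
    return weight[:, :keep_cols]


# Hypothetical Flex.2-style img_in: hidden size 3072, 33 input channels
# (latent + control + mask; the exact count here is illustrative only).
hidden = 3072
w = np.random.randn(hidden, 33 * 4).astype(np.float32)
print(truncate_img_in(w).shape)  # (3072, 64)
```

Slicing only the weight (and leaving the bias untouched) is why the result loads as a FLUX-shaped `Linear(64, hidden)` layer, and also why the inpainting/control inputs listed above stop working: their columns are simply discarded.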