Eviation committed
Commit 547ccfc · verified · 1 parent: f768145

Update README.md

Files changed (1): README.md (+2, -0)
README.md CHANGED
@@ -14,6 +14,7 @@ base_model:
 
 - pure, conversion from safetensors BF16 via F32 gguf
 - architecture: flex.2 (as not all tensor shapes match to flux)
+- no imatrix was used to quantize
 - biases and norms: F32
 - img_in.weight: BF16 (due to tensor shape and block sizes)
 - everything else according to file type
@@ -23,6 +24,7 @@ base_model:
 | [Flex.2-preview-BF16.gguf](https://huggingface.co/Eviation/Flex.2-preview/blob/main/Flex.2-preview-BF16.gguf) | BF16 | 16.3GB | - | - |
 | [Flex.2-preview-Q8_0.gguf](https://huggingface.co/Eviation/Flex.2-preview/blob/main/pure/Flex.2-preview-Q8_0.gguf) | Q8_0 | 8.68GB | TBC | - |
 | [Flex.2-preview-Q6_K.gguf](https://huggingface.co/Eviation/Flex.2-preview/blob/main/pure/Flex.2-preview-Q6_K.gguf) | Q6_K | 6.70GB | TBC | - |
+| [Flex.2-preview-Q5_1.gguf](https://huggingface.co/Eviation/Flex.2-preview/blob/main/pure/Flex.2-preview-Q5_1.gguf) | Q5_1 | 6.13GB | TBC | - |
 
 
 # Fluxified
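
For context on the "pure, conversion from safetensors BF16 via F32 gguf" bullet above, here is a minimal sketch of that intermediate step using the `safetensors` and `gguf` Python packages. This is not the author's actual conversion script: the file paths and the `flex.2` architecture string are assumptions taken from the README wording, and the sketch only covers the BF16 → F32 GGUF stage, not the subsequent quantization.

```python
# Minimal sketch (assumptions noted inline): convert a BF16 safetensors
# checkpoint into an F32 GGUF, the intermediate file the README describes.
import torch
from safetensors.torch import load_file
from gguf import GGUFWriter

SRC = "Flex.2-preview.safetensors"   # hypothetical input path
DST = "Flex.2-preview-F32.gguf"      # hypothetical output path

state = load_file(SRC)               # dict[str, torch.Tensor], mostly BF16
writer = GGUFWriter(DST, "flex.2")   # arch string assumed from the README

for name, tensor in state.items():
    # BF16 tensors cannot be viewed as numpy directly; upcast to F32 first.
    data = tensor.to(torch.float32).numpy()
    writer.add_tensor(name, data)    # stored as F32 in the output file

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```

The quantized variants in the table (Q8_0, Q6_K, Q5_1) would then be produced from this F32 file with a GGUF quantization tool. Since the README notes that not all flex.2 tensor shapes match flux, a stock quantizer build may need architecture-aware patches, which would also explain the stated per-tensor rules: biases and norms kept at F32, `img_in.weight` left at BF16 because its shape does not fit the quantized block sizes, and everything else cast according to the target file type.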