Eviation committed
Commit 091642e · verified · 1 Parent(s): f103f99

Update README.md

Files changed (1): README.md (+3 −2)
README.md CHANGED
@@ -19,8 +19,9 @@ Various quantizations for [hf:ostris/Flex.2-preview](https://huggingface.co/ostr
 
 | Filename | Quant Type | File Size | Description | Example Image |
 | -------- | ---------- | --------- | ----------- | ------------- |
-| [Flex.2-preview-fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Eviation/Flex.2-preview/blob/main/Flex.2-preview-fp8_e4m3fn_scaled.safetensors) | FP8 E4M3FN | 8.17GB | - | - |
-| [Flex.2-preview-fp8_e5m2_scaled.safetensors](https://huggingface.co/Eviation/Flex.2-preview/blob/main/Flex.2-preview-fp8_e5m2_scaled.safetensors) | FP8 E5M2 | 8.17GB | - | - |
+| [Flex.2-preview-fp8_e4m3fn.safetensors](https://huggingface.co/Eviation/Flex.2-preview/blob/main/Flex.2-preview-fp8_e4m3fn.safetensors) | FP8 E4M3FN | 8.17GB | - | - |
+| [Flex.2-preview-fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Eviation/Flex.2-preview/blob/main/Flex.2-preview-fp8_e4m3fn_scaled.safetensors) | F8_E4M3FN | 8.17GB | Scale per weight tensor | - |
+| [Flex.2-preview-fp8_e5m2_scaled.safetensors](https://huggingface.co/Eviation/Flex.2-preview/blob/main/Flex.2-preview-fp8_e5m2_scaled.safetensors) | F8_E5M2 | 8.17GB | Scale per weight tensor | - |
 
 
 # Pure GGUF
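The "Scale per weight tensor" description added in this commit refers to per-tensor scaled FP8 quantization: each weight tensor gets one scale factor chosen so its largest magnitude fits the FP8 E4M3FN range, and the scale is stored alongside the tensor for dequantization at load time. A minimal sketch of that idea, assuming NumPy and simulating the FP8 cast (not the actual conversion script used for these files):

```python
import numpy as np

F8_E4M3FN_MAX = 448.0  # largest finite value representable in FP8 E4M3FN


def scale_per_tensor(weight: np.ndarray) -> tuple[np.ndarray, float]:
    """Compute one scale for the whole tensor so its largest magnitude
    maps onto the FP8 E4M3FN range, then divide the weights by it.
    The returned array is what would be cast to fp8; the scale is kept
    as a separate tensor and multiplied back at load time."""
    scale = float(np.abs(weight).max()) / F8_E4M3FN_MAX
    scaled = weight / scale
    return scaled, scale


# hypothetical example weights
w = np.array([-3.2, 0.5, 7.0, -224.0])
scaled, s = scale_per_tensor(w)

# dequantize: multiply the stored scale back in
restored = scaled * s
assert np.allclose(restored, w)
```

The per-tensor variant trades a little accuracy (one scale must cover the whole tensor's dynamic range) for a trivially small storage overhead: a single extra scalar per weight tensor.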