mradermacher committed
Commit a641fc5 (verified) · Parent(s): 439d1e3

auto-patch README.md

Files changed (1): README.md +7 -0
README.md CHANGED
@@ -39,15 +39,22 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
+| [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
+| [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
 | [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
+| [GGUF](https://huggingface.co/mradermacher/TableGPT-R1-i1-GGUF/resolve/main/TableGPT-R1.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
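
Each link in the table above resolves to a single GGUF file, so the quants can also be fetched and loaded programmatically. A minimal sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the chosen quant (the "fast, recommended" i1-Q4_K_M row) and the prompt are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo into the local HF cache.
model_path = hf_hub_download(
    repo_id="mradermacher/TableGPT-R1-i1-GGUF",
    filename="TableGPT-R1.i1-Q4_K_M.gguf",
)

# Load the GGUF with the llama-cpp-python bindings and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Summarize this table:\n| a | b |\n|---|---|\n| 1 | 2 |\n", max_tokens=64)
print(out["choices"][0]["text"])
```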
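The `imatrix` row is not a model: it is the importance matrix used to weight the i1 quants, published so you can create your own. A hedged sketch of how it might be reused with llama.cpp's `llama-quantize` tool (called `quantize` in older builds); the full-precision source file `TableGPT-R1.f16.gguf` is an assumption here, not a file in this repo:

```python
import subprocess

# Hypothetical: build a custom IQ2_S quant from a local full-precision GGUF,
# reusing the imatrix file from this repo. llama-quantize must be on PATH.
subprocess.run(
    [
        "llama-quantize",
        "--imatrix", "TableGPT-R1.imatrix.gguf",  # importance matrix from this repo
        "TableGPT-R1.f16.gguf",                   # assumed full-precision input
        "TableGPT-R1.i1-IQ2_S.gguf",              # output quant
        "IQ2_S",                                  # target quant type
    ],
    check=True,
)
```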