ubergarm committed
Commit 7c82aeb · 1 parent: b7eb686

Note ik_llama.cpp can run your existing GGUFs too

Files changed (1): README.md (+2 −0)
README.md CHANGED
@@ -15,6 +15,8 @@ tags:
 ## `ik_llama.cpp` imatrix Quantizations of tngtech/DeepSeek-R1T-Chimera
 
 This quant collection **REQUIRES** [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support advanced non-linear SotA quants. Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc!
 
+*NOTE* `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc if you want to try it out before downloading my quants.
+
 These quants provide best in class quality for the given memory footprint.
 
 ## Big Thanks