Update README.md
Q_6_K is also good; if you want better results, take this one instead of Q_5_K.

Q_8_0 is very good, but it needs a reasonable amount of RAM; otherwise you might expect a long wait.

16-bit and 32-bit are not provided here, since their size is similar to the original safetensors; if you have a GPU, go ahead with the safetensors instead, the results are pretty much the same.

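As a rough sketch of why RAM matters here: a GGUF file's size (and roughly the memory needed to load it) can be estimated as parameters × bits-per-weight ÷ 8. The bits-per-weight figures below are ballpark values for illustration, not exact numbers for any particular model:

```python
def approx_gguf_size_gb(n_params_billions: float, bits_per_weight: float) -> float:
    """Rough model file size in GB: parameters * bits per weight / 8 bits per byte.

    Ignores metadata and per-block overhead, so treat the result as an estimate.
    """
    return n_params_billions * 1e9 * bits_per_weight / 8 / 1e9

# approximate bits-per-weight for common quant types (ballpark figures)
BPW = {"Q5_K": 5.5, "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16.0, "F32": 32.0}

for quant, bpw in BPW.items():
    print(f"7B model at {quant}: ~{approx_gguf_size_gb(7, bpw):.1f} GB")
```

This makes the trade-off above concrete: Q_8_0 keeps more precision but costs noticeably more RAM than Q_5_K/Q_6_K, while 16-bit is close to the original safetensors size.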
### how to run it?
use any connector for interacting with gguf, e.g., [gguf-connector](https://pypi.org/project/gguf-connector/)
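To get started, the connector can be installed from PyPI (a minimal setup sketch; see the package's PyPI page for its current usage instructions):

```shell
# install the gguf-connector package from PyPI
pip install gguf-connector
```

After installing, follow the connector's own documentation for pointing it at your downloaded gguf file; the exact commands depend on the connector and version you choose.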