Update README.md
README.md CHANGED

```diff
@@ -107,7 +107,7 @@ This size should allow for:
 - Q8 or Q6 inference on 6GB VRAM
 - Q5 inference on 4GB VRAM
 - Fine-tuning on ... well, with less VRAM than an 8B model
-
+
 And of course, as stated, it was a test of significant pruning, and of pruning&healing an instruct-tuned model. As a test, I think it's definitely successful.

 ## Mergekit Details
```
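The VRAM bullets in the hunk above follow from simple bits-per-weight arithmetic (weights-only footprint ≈ params × bits / 8, plus some overhead for KV cache and buffers). A minimal sketch of that estimate — the parameter count, the bits-per-weight figures, and the flat overhead below are illustrative assumptions, not values taken from the README:

```python
# Back-of-envelope VRAM estimate for quantized inference.
# Assumptions (not from the README): a hypothetical 5B-parameter model,
# approximate average bits-per-weight for common llama.cpp-style quants,
# and a flat 0.5 GB overhead for KV cache / runtime buffers.

def est_vram_gb(n_params: float, bits_per_weight: float, overhead_gb: float = 0.5) -> float:
    """Weights-only footprint in GB, plus a flat overhead term."""
    return n_params * bits_per_weight / 8 / 1e9 + overhead_gb

if __name__ == "__main__":
    for name, bpw in [("Q8 (~8.5 bpw)", 8.5), ("Q6 (~6.6 bpw)", 6.6), ("Q5 (~5.7 bpw)", 5.7)]:
        print(f"{name}: ~{est_vram_gb(5e9, bpw):.1f} GB")
```

This is only a first-order estimate: real usage also scales with context length (KV cache) and batch size, so treat the fit claims as approximate.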