Update README.md
README.md CHANGED
@@ -25,10 +25,8 @@ Converted to GGUF format for running it on Ollama/Llama.cpp so as to take advant
 Merged base mistral_v0.1_instruct with Qlora and quantised to Q4_k_s gguf format
 <br>You may find the base 16 bit model here (but further quantisation is advisable as their Qlora module was fine tuned on the 4bit nf4 base llm)
 
-
-
-**Developed by:** Nuode Chen[https://github.com/ChenNuode]
-**Finetuned from model:** Mistral_V0.1_instruct[https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1]
+**Developed by:** [Nuode Chen](https://github.com/ChenNuode)
+<br>**Finetuned from model:** [Mistral_V0.1_instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
 <br>
 
 ### Model Sources
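The model-card text in this hunk summarises a two-step pipeline: merge the QLoRA adapter into the base mistral_v0.1_instruct weights, then quantise the result to Q4_K_S GGUF for Ollama/llama.cpp. A minimal sketch of that pipeline, with several assumptions not in the card: a local adapter directory `./qlora-adapter` (placeholder path), a llama.cpp checkout in `llama.cpp/`, and recent llama.cpp tool names (`convert_hf_to_gguf.py`, `llama-quantize`), which differ in older checkouts (`convert.py`, `quantize`):

```shell
# Sketch only -- adapter path and llama.cpp script names are assumptions;
# check the tool names in your llama.cpp checkout before running.

# 1. Merge the QLoRA adapter into the fp16 base weights via peft.
python - <<'EOF'
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "./qlora-adapter")  # placeholder path
merged = model.merge_and_unload()  # bake the LoRA deltas into the base weights
merged.save_pretrained("./merged-fp16")
AutoTokenizer.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1").save_pretrained("./merged-fp16")
EOF

# 2. Convert the merged model to GGUF at f16, then quantise to Q4_K_S.
python llama.cpp/convert_hf_to_gguf.py ./merged-fp16 \
    --outfile merged-f16.gguf --outtype f16
llama.cpp/llama-quantize merged-f16.gguf merged-Q4_K_S.gguf Q4_K_S
```

Note the caveat the card itself raises: the adapter was fine-tuned against the 4-bit NF4 base, so merging into fp16 weights is an approximation, which is why further quantisation of the merged 16-bit model is described as advisable.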