---
base_model:
- N-Bot-Int/MistThena7B-V2
tags:
- text-generation-inference
- transformers
- mistral
- rp
- gguf
language:
- en
license: apache-2.0
datasets:
- N-Bot-Int/Iris-Uncensored-R2
- N-Bot-Int/Millie-R1_DPO
- N-Bot-Int/Millia-R1_DPO
---
# Support Us Through
  - [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/J3J61D8NHV)
  - [Official Ko-Fi link!](https://ko-fi.com/nexusnetworkint)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/Ks9_kWvjksA3bG0sHe4Of.png)
# GGUF Version
  **GGUF** with quants, allowing you to run the model in KoboldCPP and other AI environments!


# Quantizations:
| Quant Type    | Benefits                                          | Cons                                              |
|---------------|---------------------------------------------------|---------------------------------------------------|
| **Q4_K_M**    | βœ… Smallest size (fastest inference)              | ❌ Lowest accuracy compared to other quants      |
|               | βœ… Requires the least VRAM/RAM                    | ❌ May struggle with complex reasoning           |
|               | βœ… Ideal for edge devices & low-resource setups   | ❌ Can produce slightly degraded text quality    |
| **Q5_K_M**    | βœ… Better accuracy than Q4, while still compact   | ❌ Slightly larger model size than Q4            |
|               | βœ… Good balance between speed and precision       | ❌ Needs a bit more VRAM than Q4                 |
|               | βœ… Works well on mid-range GPUs                   | ❌ Still not as accurate as higher-bit models    |
| **Q8_0**      | βœ… Highest accuracy (closest to full model)       | ❌ Requires significantly more VRAM/RAM          |
|               | βœ… Best for complex reasoning & detailed outputs  | ❌ Slower inference compared to Q4 & Q5          |
|               | βœ… Suitable for high-end GPUs & serious workloads | ❌ Larger file size (takes more storage)         |
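As a sketch of how a quant from the table above might be run locally with common GGUF runtimes (the exact `.gguf` filename below is an assumption; match it to the actual file list in this repository):

```shell
# Run the Q4_K_M quant with llama.cpp's llama-cli
# (filename is illustrative; use the real .gguf file from this repo)
./llama-cli -m MistThena7B-V2.Q4_K_M.gguf -p "Hello!" -n 128

# Or load the same file with KoboldCPP
python koboldcpp.py --model MistThena7B-V2.Q4_K_M.gguf
```

Pick Q4_K_M for low-VRAM setups and Q8_0 when accuracy matters more than speed, per the table above.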

# Model Details:
  Read the full model details on Hugging Face:
  [Model Details Here!](https://huggingface.co/N-Bot-Int/MistThena7B-V2)