---
base_model:
- N-Bot-Int/MistThena7B
tags:
- text-generation-inference
- transformers
- mistral
- rp
- gguf
language:
- en
license: apache-2.0
datasets:
- N-Bot-Int/Iris-Uncensored-R1
- N-Bot-Int/Moshpit-Combined-R2-Uncensored
- N-Bot-Int/Mushed-Dataset-Uncensored
- N-Bot-Int/Muncher-R1-Uncensored
- unalignment/toxic-dpo-v0.1
library_name: transformers
new_version: N-Bot-Int/MistThena7BV2-GGUF
---
# Support Us Through
  - [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/J3J61D8NHV)
  - [Official Ko-Fi link!](https://ko-fi.com/nexusnetworkint)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/s4NcmxJ2pyBpDeULdYayv.png)
# GGUF Version
  **GGUF** build with quantizations, letting you run the model in KoboldCPP and other AI environments!


# Quantizations:
| Quant Type    | Benefits                                          | Cons                                              |
|---------------|---------------------------------------------------|---------------------------------------------------|
| **Q4_K_M**    | ✅ Smallest size (fastest inference)              | ❌ Lowest accuracy compared to other quants      |
|               | ✅ Requires the least VRAM/RAM                    | ❌ May struggle with complex reasoning           |
|               | ✅ Ideal for edge devices & low-resource setups   | ❌ Can produce slightly degraded text quality    |
| **Q5_K_M**    | ✅ Better accuracy than Q4, while still compact   | ❌ Slightly larger model size than Q4            |
|               | ✅ Good balance between speed and precision       | ❌ Needs a bit more VRAM than Q4                 |
|               | ✅ Works well on mid-range GPUs                   | ❌ Still not as accurate as higher-bit models    |
| **Q8_0**      | ✅ Highest accuracy (closest to full model)       | ❌ Requires significantly more VRAM/RAM          |
|               | ✅ Best for complex reasoning & detailed outputs  | ❌ Slower inference compared to Q4 & Q5          |
|               | ✅ Suitable for high-end GPUs & serious workloads | ❌ Larger file size (takes more storage)         |
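
  To pick a quant that fits your hardware, a rough rule of thumb is: file size ≈ parameter count × effective bits per weight. The sketch below estimates sizes for a 7B model; the bits-per-weight figures are ballpark assumptions for these llama.cpp quant types (actual GGUF sizes vary by architecture), not measurements of this model:

```python
def gguf_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GiB: parameters x bits per weight."""
    return n_params * bits_per_weight / 8 / 1024**3

# Approximate effective bits per weight (assumed ballpark values).
BPW = {"Q4_K_M": 4.85, "Q5_K_M": 5.69, "Q8_0": 8.50}

for quant, bpw in BPW.items():
    print(f"{quant}: ~{gguf_size_gib(7e9, bpw):.1f} GiB")
```

  Add roughly 1–2 GiB on top of the file size for the KV cache and runtime overhead when budgeting VRAM/RAM.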

# Model Details:
  Read the full model details on Hugging Face:
  [Model Details Here!](https://huggingface.co/N-Bot-Int/MistThena7B)