---
license: mit
language:
- en
- vi
base_model:
- 5CD-AI/Vintern-3B-R-beta
pipeline_tag: image-text-to-text
library_name: transformers
---

Static GGUF quants of https://huggingface.co/5CD-AI/Vintern-3B-R-beta

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
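
As a minimal sketch of text-only inference, the quants can be loaded with the `llama-cpp-python` bindings (the file name below is taken from the table further down and assumed to be downloaded locally; image input additionally requires the mmproj file and is not covered here):

```python
# Minimal text-only sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes the Q4_K_M quant from the table below has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Vintern-3B-beta-Q4_K_M.gguf",  # path to the downloaded quant
    n_ctx=4096,                                # context window size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Xin chào! What can you do?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```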

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](./mmproj-f16.gguf) | mmproj-fp16 | 1.5 | vision supplement |
| [GGUF](./Vintern-3B-beta-Q2_K.gguf) | Q2_K | 3.1 |  |
| [GGUF](./Vintern-3B-beta-Q3_K_S.gguf) | Q3_K_S | 3.6 |  |
| [GGUF](./Vintern-3B-beta-Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](./Vintern-3B-beta-Q3_K_L.gguf) | Q3_K_L | 4.2 |  |
| [GGUF](./Vintern-3B-beta-IQ4_XS.gguf) | IQ4_XS | 4.4 |  |
| [GGUF](./Vintern-3B-beta-Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](./Vintern-3B-beta-Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](./Vintern-3B-beta-Q5_K_S.gguf) | Q5_K_S | 5.4 |  |
| [GGUF](./Vintern-3B-beta-Q5_K_M.gguf) | Q5_K_M | 5.5 |  |
| [GGUF](./Vintern-3B-beta-Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](./Vintern-3B-beta-Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](./Vintern-3B-beta-f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
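
Individual files can also be fetched programmatically with `huggingface_hub`; this is a sketch only, and the `repo_id` below is a placeholder for this repository's actual id:

```python
# Sketch: download a single quant with huggingface_hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="<this-repo-id>",                # placeholder: replace with this repo's id
    filename="Vintern-3B-beta-Q4_K_M.gguf",  # any filename from the table above
)
print(path)  # local cache path of the downloaded GGUF
```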