---
tags:
- text-generation
- TensorBlock
- GGUF
license: cc-by-nc-sa-4.0
language:
- ko
base_model: Edentns/DataVortexS-10.7B-v0.2
pipeline_tag: text-generation
datasets:
- beomi/KoAlpaca-v1.1a
- Edentns/Worktronics-FAQ
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

[![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
[![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
[![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
[![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)


## Edentns/DataVortexS-10.7B-v0.2 - GGUF

This repo contains GGUF format model files for [Edentns/DataVortexS-10.7B-v0.2](https://huggingface.co/Edentns/DataVortexS-10.7B-v0.2).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
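
As a quick sanity check, any of the files below can be loaded directly with the llama.cpp CLI. The following is a minimal sketch, assuming the `llama-cli` binary from a recent build (older builds name it `main`) and the Q4_K_M file from the table below:

```shell
# Minimal sketch: run the Q4_K_M quant interactively with llama.cpp.
# Assumes a build at or after commit b4011; adjust the binary name and
# model path to your setup. `-e` enables \n escape processing in -p.
./llama-cli \
  -m ./DataVortexS-10.7B-v0.2-Q4_K_M.gguf \
  -p "You are a helpful assistant.\n\n### Instruction:\n{prompt}\n\n### Response:\n" \
  -e -n 256 -c 4096
```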


## Our projects
<table border="1" cellspacing="0" cellpadding="10">
  <tr>
    <th colspan="2" style="font-size: 25px;">Forge</th>
  </tr>
  <tr>
    <th colspan="2">
      <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
    </th>
  </tr>
  <tr>
    <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
  </tr>
  <tr>
    <th colspan="2">
      <a href="https://github.com/TensorBlock/forge" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸš€ Try it now! πŸš€</a>
    </th>
  </tr>

  <tr>
    <th style="font-size: 25px;">Awesome MCP Servers</th>
    <th style="font-size: 25px;">TensorBlock Studio</th>
  </tr>
  <tr>
    <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
    <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
  </tr>
  <tr>
    <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
    <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
  </tr>
  <tr>
    <th>
      <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸ‘€ See what we built πŸ‘€</a>
    </th>
    <th>
      <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸ‘€ See what we built πŸ‘€</a>
    </th>
  </tr>
</table>

## Prompt template


```
{system_prompt}

### Instruction:
{prompt}

### Response:
```
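
As an illustration, a filled-in prompt might look like the example below (the system prompt and instruction are placeholders chosen for this card, not taken from the upstream model repo):

```
You are a helpful assistant that answers in Korean.

### Instruction:
대한민국의 수도는 어디인가요?

### Response:
```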

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [DataVortexS-10.7B-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q2_K.gguf) | Q2_K | 3.799 GB | smallest, significant quality loss - not recommended for most purposes |
| [DataVortexS-10.7B-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q3_K_S.gguf) | Q3_K_S | 4.421 GB | very small, high quality loss |
| [DataVortexS-10.7B-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q3_K_M.gguf) | Q3_K_M | 4.916 GB | very small, high quality loss |
| [DataVortexS-10.7B-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q3_K_L.gguf) | Q3_K_L | 5.339 GB | small, substantial quality loss |
| [DataVortexS-10.7B-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q4_0.gguf) | Q4_0 | 5.740 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [DataVortexS-10.7B-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q4_K_S.gguf) | Q4_K_S | 5.783 GB | small, greater quality loss |
| [DataVortexS-10.7B-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q4_K_M.gguf) | Q4_K_M | 6.103 GB | medium, balanced quality - recommended |
| [DataVortexS-10.7B-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q5_0.gguf) | Q5_0 | 6.982 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [DataVortexS-10.7B-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q5_K_S.gguf) | Q5_K_S | 6.982 GB | large, low quality loss - recommended |
| [DataVortexS-10.7B-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q5_K_M.gguf) | Q5_K_M | 7.169 GB | large, very low quality loss - recommended |
| [DataVortexS-10.7B-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q6_K.gguf) | Q6_K | 8.301 GB | very large, extremely low quality loss |
| [DataVortexS-10.7B-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/DataVortexS-10.7B-v0.2-GGUF/blob/main/DataVortexS-10.7B-v0.2-Q8_0.gguf) | Q8_0 | 10.751 GB | very large, extremely low quality loss - not recommended |


## Downloading instruction

### Command line

First, install the Hugging Face Hub CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/DataVortexS-10.7B-v0.2-GGUF --include "DataVortexS-10.7B-v0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/DataVortexS-10.7B-v0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
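
To mirror the entire repository rather than individual files, the `--include` filter can simply be omitted; note that, per the table above, the full set of quantizations totals roughly 75 GB:

```shell
# Download every GGUF file in the repo (roughly 75 GB in total).
huggingface-cli download tensorblock/DataVortexS-10.7B-v0.2-GGUF --local-dir MY_LOCAL_DIR
```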