Commit cf18787 (initial commit, 0 parents)

Files changed:
- .gitattributes +41 -0
- README.md +78 -0
- T-pro-it-2.1-Q4_K_M.gguf +3 -0
- T-pro-it-2.1-Q5_0.gguf +3 -0
- T-pro-it-2.1-Q5_K_M.gguf +3 -0
- T-pro-it-2.1-Q5_K_S.gguf +3 -0
- T-pro-it-2.1-Q6_K.gguf +3 -0
- T-pro-it-2.1-Q8_0.gguf +3 -0
.gitattributes
ADDED
@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
T-pro-it-2.1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
T-pro-it-2.1-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
T-pro-it-2.1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
T-pro-it-2.1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
T-pro-it-2.1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
T-pro-it-2.1-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
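The attributes above route matching files through Git LFS. As a rough illustration only (real gitattributes matching is path-aware and treats `**` specially, unlike `fnmatch`), one can sanity-check which filenames a few of these patterns would catch; note the GGUF weights are tracked by exact filename, not a `*.gguf` wildcard:

```python
from fnmatch import fnmatch

# A few of the LFS patterns from the .gitattributes above.
LFS_PATTERNS = [
    "*.safetensors",
    "*.bin",
    "*tfevents*",
    "T-pro-it-2.1-Q4_K_M.gguf",
]

def is_lfs_tracked(filename: str) -> bool:
    """Simplified check: does any pattern match the bare filename?"""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("model.safetensors"))         # True
print(is_lfs_tracked("T-pro-it-2.1-Q4_K_M.gguf"))  # True
print(is_lfs_tracked("README.md"))                 # False
```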
README.md
ADDED
@@ -0,0 +1,78 @@
---
language:
- en
base_model: t-tech/T-pro-it-2.1
tags:
- llama-cpp
- gguf
license: apache-2.0
---

# T-pro-it-2.1-GGUF

**🚨 Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.**
This repository contains **T-pro-it-2.1** converted to the **GGUF** format with
[llama.cpp](https://github.com/ggerganov/llama.cpp).
See the original BF16 model here: [t-tech/T-pro-it-2.1](https://huggingface.co/t-tech/T-pro-it-2.1).


## Description

T-pro-it-2.1 is an efficient Russian-language model built upon the Qwen 3 model family, with improved instruction following and tool-calling capabilities compared to [T-pro-it-2.0](https://huggingface.co/t-tech/T-pro-it-2.0).
It outperforms Qwen3-32B in tool-calling scenarios, which is essential for agentic applications, and is built for both general tasks and complex workflows.

**NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Specifying `enable_thinking=False` is no longer required.**
## 📊 Benchmarks

| Model               | Ru Arena Hard | ruIFeval* | ruBFCL |
|---------------------|---------------|-----------|--------|
| T-pro-it-2.1        | 93.8          | 80.7      | 66.0   |
| T-pro-it-2.1-Q8_0   | 94.2          | 80.8      | 65.8   |
| T-pro-it-2.1-Q6_K   | 93.4          | 80.0      | 65.9   |
| T-pro-it-2.1-Q5_K_M | 92.7          | 81.4      | 65.7   |
| T-pro-it-2.1-Q5_K_S | 92.3          | 80.4      | 65.2   |
| T-pro-it-2.1-Q5_0   | 93.8          | 79.9      | 64.8   |
| T-pro-it-2.1-Q4_K_M | 92.6          | 80.7      | 64.8   |

\* The ruIFeval score is the mean of four values: prompt-level and instruction-level accuracy, each under strict and loose matching.
| 41 |
+
|
| 42 |
+
> **Recommendation:** choose the **highest-quality quantisation that fits your hardware** (VRAM / RAM).
|
| 43 |
+
|
| 44 |
+
| Filename (→ `-gguf`) | Quant method | Bits | Size (GB) |
|
| 45 |
+
|----------------------|--------------|------|-----------|
|
| 46 |
+
| `T-pro-it-2.1-q8_0` | Q8_0 | 8 | 34.8 |
|
| 47 |
+
| `T-pro-it-2.1-q6_k` | Q6_K | 6 | 26.9 |
|
| 48 |
+
| `T-pro-it-2.1-q5_k_m` | Q5_K_M | 5 | 23.2 |
|
| 49 |
+
| `T-pro-it-2.1-q5_k_s` | Q5_K_S | 5 | 22.6 |
|
| 50 |
+
| `T-pro-it-2.1-q5_0` | Q5_0 | 5 | 22.6 |
|
| 51 |
+
| `T-pro-it-2.1-q4_k_m` | Q4_K_M | 4 | 19.8 |
|
| 52 |
+
|
| 53 |
+
*Size figures assume **no GPU off-loading**. Off-loading lowers RAM usage and uses VRAM instead.*
|
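Following the recommendation above, a small helper can pick the highest-quality quant whose file fits a given memory budget (sizes in GB copied from the table; note actual memory use will be somewhat higher due to the KV cache and runtime overhead):

```python
from typing import Optional

# (filename, size in GB) from the table above, best quality first.
QUANTS = [
    ("T-pro-it-2.1-Q8_0.gguf",   34.8),
    ("T-pro-it-2.1-Q6_K.gguf",   26.9),
    ("T-pro-it-2.1-Q5_K_M.gguf", 23.2),
    ("T-pro-it-2.1-Q5_K_S.gguf", 22.6),
    ("T-pro-it-2.1-Q5_0.gguf",   22.6),
    ("T-pro-it-2.1-Q4_K_M.gguf", 19.8),
]

def best_fit(budget_gb: float) -> Optional[str]:
    """Return the highest-quality quant whose file fits in budget_gb."""
    for name, size in QUANTS:
        if size <= budget_gb:
            return name
    return None  # even Q4_K_M does not fit

print(best_fit(24.0))  # T-pro-it-2.1-Q5_K_M.gguf
print(best_fit(16.0))  # None
```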
## Quickstart

### llama.cpp

Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for a detailed usage guide.

We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We track the latest version of llama.cpp.
The following demonstration assumes you are running commands from the root of the `llama.cpp` repository.

```shell
./llama-cli -hf t-tech/T-pro-it-2.1-GGUF:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --presence-penalty 1.0 -c 40960 -n 32768 --no-context-shift
```

### ollama

Check out our [ollama documentation](https://qwen.readthedocs.io/en/latest/run_locally/ollama.html) for a detailed usage guide.

You can run T-pro-it-2.1 with one command:

```shell
ollama run t-tech/T-pro-it-2.1:q8_0
```

See also the [t-tech ollama homepage](https://ollama.com/t-tech/T-pro-it-2.1).
T-pro-it-2.1-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b19e88cc1154b4b0c99f09bc7b57b8e5f3a6af4135581a3bfd31a2518ed58714
size 19761766048

T-pro-it-2.1-Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06ba111467a7c0bd1a1b71b6e749566b87e658a6fcf6d5edf5ec0181c6add62f
size 22634951968

T-pro-it-2.1-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c93325a18deb28e9842c877fdb7f453e63829bae57b0603157fdc82afbfd659
size 23214290208

T-pro-it-2.1-Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:edf7a7c65e55d2963093074d108dea86109f05b659bf0ae01e809fb28fb20335
size 22634951968

T-pro-it-2.1-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e6d7215efe7c0c9f62a8c4f35e11987c41301459d1888225f96b0e0c626ffb1
size 26882597152

T-pro-it-2.1-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af2b3a761473a95ca31ab9b8647b9dc4288604a4a71f795ffd9402d2938c7621
size 34816397344
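Each `.gguf` entry in this commit is not the model weights themselves but a Git LFS pointer file in the three-line `key value` format shown above. A minimal parser sketch, assuming well-formed input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The Q8_0 pointer contents from above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:af2b3a761473a95ca31ab9b8647b9dc4288604a4a71f795ffd9402d2938c7621
size 34816397344
"""

info = parse_lfs_pointer(pointer)
print(round(int(info["size"]) / 2**30, 1))  # size on disk in GiB
```

The `size` field is the byte count of the real object; the `oid` is its SHA-256, which Git LFS uses to fetch the blob from the LFS store on checkout.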