morriszms committed on
Commit
182b3a2
·
verified ·
1 Parent(s): 3999713

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ SFT_nochat_FULL_DATA-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,147 @@
+ ---
+ library_name: transformers
+ tags:
+ - unsloth
+ - trl
+ - sft
+ - TensorBlock
+ - GGUF
+ base_model: mmmanuel/SFT_nochat_FULL_DATA
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+
+ [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
+ [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
+ [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
+ [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)
+
+
+ ## mmmanuel/SFT_nochat_FULL_DATA - GGUF
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Join our Discord to learn more about what we're building ↗
+ </a>
+ </div>
+
+ This repo contains GGUF format model files for [mmmanuel/SFT_nochat_FULL_DATA](https://huggingface.co/mmmanuel/SFT_nochat_FULL_DATA).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
+
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th colspan="2" style="font-size: 25px;">Forge</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
+ </th>
+ </tr>
+ <tr>
+ <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <a href="https://github.com/TensorBlock/forge" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">🚀 Try it now! 🚀</a>
+ </th>
+ </tr>
+
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [SFT_nochat_FULL_DATA-Q2_K.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q2_K.gguf) | Q2_K | 0.296 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [SFT_nochat_FULL_DATA-Q3_K_S.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q3_K_S.gguf) | Q3_K_S | 0.323 GB | very small, high quality loss |
+ | [SFT_nochat_FULL_DATA-Q3_K_M.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q3_K_M.gguf) | Q3_K_M | 0.347 GB | very small, high quality loss |
+ | [SFT_nochat_FULL_DATA-Q3_K_L.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q3_K_L.gguf) | Q3_K_L | 0.368 GB | small, substantial quality loss |
+ | [SFT_nochat_FULL_DATA-Q4_0.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q4_0.gguf) | Q4_0 | 0.382 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [SFT_nochat_FULL_DATA-Q4_K_S.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q4_K_S.gguf) | Q4_K_S | 0.383 GB | small, greater quality loss |
+ | [SFT_nochat_FULL_DATA-Q4_K_M.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q4_K_M.gguf) | Q4_K_M | 0.397 GB | medium, balanced quality - recommended |
+ | [SFT_nochat_FULL_DATA-Q5_0.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q5_0.gguf) | Q5_0 | 0.437 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [SFT_nochat_FULL_DATA-Q5_K_S.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q5_K_S.gguf) | Q5_K_S | 0.437 GB | large, low quality loss - recommended |
+ | [SFT_nochat_FULL_DATA-Q5_K_M.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q5_K_M.gguf) | Q5_K_M | 0.444 GB | large, very low quality loss - recommended |
+ | [SFT_nochat_FULL_DATA-Q6_K.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q6_K.gguf) | Q6_K | 0.495 GB | very large, extremely low quality loss |
+ | [SFT_nochat_FULL_DATA-Q8_0.gguf](https://huggingface.co/tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF/blob/main/SFT_nochat_FULL_DATA-Q8_0.gguf) | Q8_0 | 0.639 GB | very large, extremely low quality loss - not recommended |
+
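As a rough illustration of the size/quality trade-off in the table above, the sketch below (a hypothetical helper, not part of this repo or any library) picks the largest quant file that fits a given memory budget. File sizes are taken directly from the table:

```python
from typing import Optional

# File sizes in GB, copied from the model file specification table above.
QUANT_SIZES_GB = {
    "Q2_K": 0.296, "Q3_K_S": 0.323, "Q3_K_M": 0.347, "Q3_K_L": 0.368,
    "Q4_0": 0.382, "Q4_K_S": 0.383, "Q4_K_M": 0.397, "Q5_0": 0.437,
    "Q5_K_S": 0.437, "Q5_K_M": 0.444, "Q6_K": 0.495, "Q8_0": 0.639,
}

def pick_quant(budget_gb: float) -> Optional[str]:
    """Return the largest (roughly highest-quality) quant whose file
    fits within the given budget, or None if nothing fits."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(0.4))  # Q4_K_M (0.397 GB) is the largest file under 0.4 GB
```

Larger file size generally tracks lower quality loss here, so picking the biggest file that fits is a reasonable first cut; the "recommended" notes in the table are still worth checking for the legacy Q4_0/Q5_0 types.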
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF --include "SFT_nochat_FULL_DATA-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
+
+ ```shell
+ huggingface-cli download tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
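The same download can also be scripted from Python with the `huggingface_hub` library. Below is a sketch: `hf_hub_download` is the library's real API, while `resolve_url` is an illustrative helper of our own for building direct-download links, not a `huggingface_hub` function:

```python
def resolve_url(repo_id: str, filename: str) -> str:
    """Build the direct-download URL for a file in a Hugging Face repo.
    (Illustrative helper, not part of huggingface_hub.)"""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

if __name__ == "__main__":
    # Requires `pip install huggingface_hub` and network access.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="tensorblock/mmmanuel_SFT_nochat_FULL_DATA-GGUF",
        filename="SFT_nochat_FULL_DATA-Q4_K_M.gguf",
        local_dir="models",
    )
    print(path)  # local path of the downloaded GGUF file
```

`hf_hub_download` caches files, so repeated runs do not re-download an unchanged file.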
SFT_nochat_FULL_DATA-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e8efe49cee4ffa9929ca61aa63edfe12397a9d7f3cad62e6e2ce734399638ee
+ size 296233536
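The three lines committed for each `.gguf` file are a standard git-LFS pointer (spec version, sha256 oid, and byte size), not the model weights themselves; Hugging Face stores the actual binaries in LFS. A minimal parser sketch for such a pointer:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-LFS pointer file into its key/value fields.
    Each line is '<key> <value>'; size is the blob size in bytes."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The Q2_K pointer committed above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2e8efe49cee4ffa9929ca61aa63edfe12397a9d7f3cad62e6e2ce734399638ee
size 296233536
"""
info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9)  # ~0.296 GB, matching the Q2_K row in the README table
```

The `size` field is how the README's "File Size" column can be derived without downloading the weights.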
SFT_nochat_FULL_DATA-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f77f120e56d640a146f442643c81f90e86bf91d76047e399ae135fa0dae89d6d
+ size 368486976
SFT_nochat_FULL_DATA-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb66f91f33efb491029098a536ce133752121b1bf863c233cd5987c85c3ba2b7
+ size 347122240
SFT_nochat_FULL_DATA-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef64160e2b8b77154d8f3a43e77307600ecd5d3e22bf77a45326c6045487a9f4
+ size 323070528
SFT_nochat_FULL_DATA-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bba7b1b89df0808739349c02a343cc37d63f80b22fb8da296f31e0eaf3bab069
+ size 381561408
SFT_nochat_FULL_DATA-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef0a2b37ef009a475be98f3f1faedd3de835d779ae2a694781b4303f197aca92
+ size 396700224
SFT_nochat_FULL_DATA-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b9066600efc6ecce2ea71f405c65fb0d0885f666d0970b9b95be921df9901a5
+ size 383265344
SFT_nochat_FULL_DATA-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86e54f8367bd18ee04814492d591519b98297c95f2087cfa81f239238c0a37ad
+ size 436611648
SFT_nochat_FULL_DATA-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:41c81369428a73b2b7571c7db91c01bafd1554e09ada12703118a2e1c70706c3
+ size 444410432
SFT_nochat_FULL_DATA-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a672bb2b7b9b1bb369820bbc34cf867874584be04d11c9bb78c5858d762e7486
+ size 436611648
SFT_nochat_FULL_DATA-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ebbf47a18051e86229f55711f04d55a5b2455dc553ae2597f0b6c229974e500c
+ size 495102528
SFT_nochat_FULL_DATA-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9477b431d1637a5c071b53e71c21f005fe0e77e988838ee336e42d463475417a
+ size 639442496