morriszms committed
Commit 5ff59f4 · verified · 1 parent: 8ce49a0

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ o80-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ o80-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,80 @@
+ ---
+ library_name: transformers
+ tags:
+ - TensorBlock
+ - GGUF
+ base_model: impossibleexchange/o80
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## impossibleexchange/o80 - GGUF
+
+ This repo contains GGUF format model files for [impossibleexchange/o80](https://huggingface.co/impossibleexchange/o80).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
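As a reference for how the placeholders are filled, the ChatML-style template above can be rendered in plain Python (the `build_prompt` helper name is illustrative; the special-token strings are taken verbatim from the template):

```python
# Fill the ChatML-style prompt template shown above.
# <|im_start|> / <|im_end|> are the template's special tokens, copied verbatim.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

text = build_prompt("You are a helpful assistant.", "Hello!")
print(text)
```

Note the prompt ends after the `assistant` tag, leaving the model to generate the assistant turn.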
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [o80-Q2_K.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q2_K.gguf) | Q2_K | 3.090 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [o80-Q3_K_S.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q3_K_S.gguf) | Q3_K_S | 3.551 GB | very small, high quality loss |
+ | [o80-Q3_K_M.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q3_K_M.gguf) | Q3_K_M | 3.880 GB | very small, high quality loss |
+ | [o80-Q3_K_L.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q3_K_L.gguf) | Q3_K_L | 4.172 GB | small, substantial quality loss |
+ | [o80-Q4_0.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q4_0.gguf) | Q4_0 | 4.497 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [o80-Q4_K_S.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q4_K_S.gguf) | Q4_K_S | 4.525 GB | small, greater quality loss |
+ | [o80-Q4_K_M.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q4_K_M.gguf) | Q4_K_M | 4.736 GB | medium, balanced quality - recommended |
+ | [o80-Q5_0.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q5_0.gguf) | Q5_0 | 5.388 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [o80-Q5_K_S.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q5_K_S.gguf) | Q5_K_S | 5.388 GB | large, low quality loss - recommended |
+ | [o80-Q5_K_M.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q5_K_M.gguf) | Q5_K_M | 5.511 GB | large, very low quality loss - recommended |
+ | [o80-Q6_K.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q6_K.gguf) | Q6_K | 6.334 GB | very large, extremely low quality loss |
+ | [o80-Q8_0.gguf](https://huggingface.co/tensorblock/o80-GGUF/blob/main/o80-Q8_0.gguf) | Q8_0 | 8.202 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/o80-GGUF --include "o80-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/o80-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
o80-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0c1460c4c6165772eccd870beefe38d3022954b35797e45188915f9301d22c8
+ size 3090367680
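Each `.gguf` file in this commit is stored as a Git LFS pointer like the one just shown: plain text with `version`, `oid`, and `size` fields, one `key value` pair per line. A minimal sketch of parsing such a pointer (the `parse_lfs_pointer` helper name is illustrative; the pointer text is copied from the Q2_K entry above):

```python
# Parse a Git LFS pointer file into a dict of its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # split on the first space only
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c0c1460c4c6165772eccd870beefe38d3022954b35797e45188915f9301d22c8
size 3090367680"""

info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))
```

The `oid` is the SHA-256 of the actual file content and `size` is its byte count, which is how the ~3 GB figure in the model table above is derived.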
o80-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7767ab8c698ddf8357af26149caf88fe098ca2e462cf06b048970ddb6b73cbf9
+ size 4171754688
o80-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:050aa6c3652a718bce5042a71e107ef0addf693e02a648c72f0e804de404772f
+ size 3880381632
o80-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b0794cf6294c363a7d48642c087ead80b209b84ac6ab81c2c5e33703ca8306b9
+ size 3551029440
o80-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a51dcfadb9a2311e1a0eabb83d5f1abcf937dd17668cb4b7149d89eb7563f7ae
+ size 4497171648
o80-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20e147bae802511d49202ea19aa87d5cd7ec9e84155b6ecc171a68b88f74b639
+ size 4736368320
o80-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0c5a3233d5b691798a61cce4dd8fbac6cc95f2012a4221a2eb3aa0c813008c5
+ size 4524598464
o80-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c33bf7f4c0fc95fc30056050bacbb9e88299c03b38c6aac4ceeb34d49220d5f0
+ size 5387658432
o80-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d706f12e87411ba38405bb519aeb0b802c60aa6af2f738b9cb0ec2f229638287
+ size 5510880960
o80-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:124825c1ae6a501ab17dcdaf5ba8672ac1ea691b2b866fd8402f4e05f22aa0b1
+ size 5387658432
o80-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8631f0d627e8472d82193b671e732837eccb7907d3fe46e59a4d3e99e3e2fb3b
+ size 6333800640
o80-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a36baafa61400c70cbe9844715e2b7d1b66c9b0a560989fa0e6217e1d3a8d9cc
+ size 8202252480