morriszms committed on
Commit 31c5045 · verified · 1 parent: 8a5064d

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistralopithecus-v0.1-10.8B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Mistralopithecus-v0.1-10.8B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6470c67b10f3fca011ae6c0f3b7f639445c7cc2811ad72ca36de52b955ed598e
+ size 4069901536
Mistralopithecus-v0.1-10.8B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87c559f6acb7a78b347d2ad7c17a959473f951f6cf0ad2d3d93c8d69d0b793cc
+ size 5723282752
Mistralopithecus-v0.1-10.8B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a9b5dfe05001484e3dcf49fa44c4c863b1810db187427d9397c35d990086d13
+ size 5268200768
Mistralopithecus-v0.1-10.8B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59776d90766cf625aaa8b4aec4b7b15d889e0c21670b0be9f1f0e57b6e2ce0b9
+ size 4737097024
Mistralopithecus-v0.1-10.8B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f844505e1802141751ff744a6bfc51c6e506c54b45f76c95fdf8e54ebac0230
+ size 6152584480
Mistralopithecus-v0.1-10.8B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:813ea7164b3a82c46aa330d9900c9fff5e4d40e96ca929747bc80a59aba83ba6
+ size 6541868320
Mistralopithecus-v0.1-10.8B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1affe485ed8fc5654ecedb40f2bb80a5ac6e93710eaa4c649c8d238fe56a5758
+ size 6198721824
Mistralopithecus-v0.1-10.8B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5208c7768d7d39c31613bfdaf825bff9569f6278f4d15c7a6e53bc8daf785323
+ size 7484807968
Mistralopithecus-v0.1-10.8B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47e5f5ebb86dded518e57b3625808825802faaa11a371b973f38b5c0ca2d337e
+ size 7685348128
Mistralopithecus-v0.1-10.8B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc866ebfb1c63560c98d238a39c974117507f1b5c54726f580a6478a918a767d
+ size 7484807968
Mistralopithecus-v0.1-10.8B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99db609f2f7e3f399796dd3ee9e8fe66e62dfc2b95ee00f28ec987fd473e265c
+ size 8900295424
Mistralopithecus-v0.1-10.8B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e20b5d3531c07f4f6e25e2c906e373f91b594a0641a1a98b82796ad4fa75c947
+ size 11527204672
README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ license: cc-by-nc-sa-4.0
+ base_model: DopeorNope/Mistralopithecus-v0.1-10.8B
+ tags:
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## DopeorNope/Mistralopithecus-v0.1-10.8B - GGUF
+
+ This repo contains GGUF format model files for [DopeorNope/Mistralopithecus-v0.1-10.8B](https://huggingface.co/DopeorNope/Mistralopithecus-v0.1-10.8B).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Mistralopithecus-v0.1-10.8B-Q2_K.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q2_K.gguf) | Q2_K | 3.790 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Mistralopithecus-v0.1-10.8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q3_K_S.gguf) | Q3_K_S | 4.412 GB | very small, high quality loss |
+ | [Mistralopithecus-v0.1-10.8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q3_K_M.gguf) | Q3_K_M | 4.906 GB | very small, high quality loss |
+ | [Mistralopithecus-v0.1-10.8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q3_K_L.gguf) | Q3_K_L | 5.330 GB | small, substantial quality loss |
+ | [Mistralopithecus-v0.1-10.8B-Q4_0.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q4_0.gguf) | Q4_0 | 5.730 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Mistralopithecus-v0.1-10.8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q4_K_S.gguf) | Q4_K_S | 5.773 GB | small, greater quality loss |
+ | [Mistralopithecus-v0.1-10.8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q4_K_M.gguf) | Q4_K_M | 6.093 GB | medium, balanced quality - recommended |
+ | [Mistralopithecus-v0.1-10.8B-Q5_0.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q5_0.gguf) | Q5_0 | 6.971 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Mistralopithecus-v0.1-10.8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q5_K_S.gguf) | Q5_K_S | 6.971 GB | large, low quality loss - recommended |
+ | [Mistralopithecus-v0.1-10.8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q5_K_M.gguf) | Q5_K_M | 7.158 GB | large, very low quality loss - recommended |
+ | [Mistralopithecus-v0.1-10.8B-Q6_K.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q6_K.gguf) | Q6_K | 8.289 GB | very large, extremely low quality loss |
+ | [Mistralopithecus-v0.1-10.8B-Q8_0.gguf](https://huggingface.co/tensorblock/Mistralopithecus-v0.1-10.8B-GGUF/blob/main/Mistralopithecus-v0.1-10.8B-Q8_0.gguf) | Q8_0 | 10.736 GB | very large, extremely low quality loss - not recommended |
+
+ ## Downloading instruction
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/Mistralopithecus-v0.1-10.8B-GGUF --include "Mistralopithecus-v0.1-10.8B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/Mistralopithecus-v0.1-10.8B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
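The `--include` flag above selects repo files by shell-style glob pattern. As a minimal offline sketch of that selection (using Python's `fnmatch`, whose matching is similar but not guaranteed identical to `huggingface-cli`'s; the file list is taken from the table above):

```python
# Illustrative only: show which of this repo's GGUF files a glob like
# "*Q4_K*gguf" (as passed to --include) would select.
from fnmatch import fnmatch

repo_files = [
    "Mistralopithecus-v0.1-10.8B-Q2_K.gguf",
    "Mistralopithecus-v0.1-10.8B-Q3_K_M.gguf",
    "Mistralopithecus-v0.1-10.8B-Q4_K_M.gguf",
    "Mistralopithecus-v0.1-10.8B-Q4_K_S.gguf",
    "Mistralopithecus-v0.1-10.8B-Q8_0.gguf",
]

pattern = "*Q4_K*gguf"
# Keep only filenames matching the glob pattern.
selected = [f for f in repo_files if fnmatch(f, pattern)]
print(selected)  # the two Q4_K variants
```

This is why the pattern form of the command fetches both the Q4_K_M and Q4_K_S files in one invocation.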