morriszms committed
Commit db65830 · verified · 1 Parent(s): 4df0a6e

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,12 +1,19 @@
 ---
 license: apache-2.0
- pipeline_tag: text-generation
 tags:
- - finetuned
 - TensorBlock
 - GGUF
- inference: false
- base_model: PIXMELT/Mistral-7B-Instruct-v0.2
 ---

 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -20,11 +27,11 @@ base_model: PIXMELT/Mistral-7B-Instruct-v0.2
 </div>
 </div>

- ## PIXMELT/Mistral-7B-Instruct-v0.2 - GGUF

- This repo contains GGUF format model files for [PIXMELT/Mistral-7B-Instruct-v0.2](https://huggingface.co/PIXMELT/Mistral-7B-Instruct-v0.2).

- The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
@@ -35,25 +42,27 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 ## Prompt template

 ```
- <s>[INST] {prompt} [/INST]
 ```

 ## Model file specification

 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
- | [Mistral-7B-Instruct-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
- | [Mistral-7B-Instruct-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
- | [Mistral-7B-Instruct-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
- | [Mistral-7B-Instruct-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
- | [Mistral-7B-Instruct-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
- | [Mistral-7B-Instruct-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
- | [Mistral-7B-Instruct-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
- | [Mistral-7B-Instruct-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [Mistral-7B-Instruct-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
- | [Mistral-7B-Instruct-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
- | [Mistral-7B-Instruct-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
- | [Mistral-7B-Instruct-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |


 ## Downloading instruction
@@ -69,11 +78,11 @@ pip install -U "huggingface_hub[cli]"
 Then, download the individual model file to a local directory:

 ```shell
- huggingface-cli download tensorblock/Mistral-7B-Instruct-v0.2-GGUF --include "Mistral-7B-Instruct-v0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
 ```

 If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:

 ```shell
- huggingface-cli download tensorblock/Mistral-7B-Instruct-v0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
 ```
 
 ---
+ language:
+ - en
+ library_name: transformers
 license: apache-2.0
 tags:
+ - unsloth
+ - transformers
+ - mistral
+ - mistral-7b
+ - mistral-instruct
+ - instruct
+ - bnb
 - TensorBlock
 - GGUF
+ base_model: unsloth/mistral-7b-instruct-v0.2
 ---

 <div style="width: auto; margin-left: auto; margin-right: auto">
 
 </div>
 </div>

+ ## unsloth/mistral-7b-instruct-v0.2 - GGUF

+ This repo contains GGUF format model files for [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2).

+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
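To try one of these quants directly from Python, here is a minimal, hypothetical sketch using the third-party llama-cpp-python bindings (installed via `pip install llama-cpp-python`); the file path and generation parameters are illustrative, not part of this repo.

```python
# Hypothetical sketch: load a quant from the table below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/mistral-7b-instruct-v0.2-Q4_K_M.gguf",  # any quant from the table
    n_ctx=4096,  # context window to allocate
)

# The prompt string follows the template documented below.
output = llm(
    "<s> [INST] You are a helpful assistant.\n\nWhat is GGUF? [/INST]",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```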

 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 
 ## Prompt template

 ```
+ <s> [INST] {system_prompt}
+
+ {prompt} [/INST]
 ```
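To make the template concrete, below is a minimal Python sketch of filling it in; the system_prompt and prompt values are illustrative, not part of this repo.

```python
# Illustrative only: substitute your own system and user messages.
system_prompt = "You are a helpful assistant."
prompt = "Summarize what GGUF quantization does."

formatted = f"<s> [INST] {system_prompt}\n\n{prompt} [/INST]"
print(formatted)
```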

 ## Model file specification

 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
+ | [mistral-7b-instruct-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [mistral-7b-instruct-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
+ | [mistral-7b-instruct-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
+ | [mistral-7b-instruct-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
+ | [mistral-7b-instruct-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [mistral-7b-instruct-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
+ | [mistral-7b-instruct-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
+ | [mistral-7b-instruct-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [mistral-7b-instruct-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
+ | [mistral-7b-instruct-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
+ | [mistral-7b-instruct-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
+ | [mistral-7b-instruct-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/mistral-7b-instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |


 ## Downloading instruction
 
 Then, download the individual model file to a local directory:

 ```shell
+ huggingface-cli download tensorblock/mistral-7b-instruct-v0.2-GGUF --include "mistral-7b-instruct-v0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
 ```

 If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:

 ```shell
+ huggingface-cli download tensorblock/mistral-7b-instruct-v0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
 ```
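The same downloads can also be scripted with the huggingface_hub Python API (the package installed by `pip install -U "huggingface_hub[cli]"` above); a sketch, with MY_LOCAL_DIR standing in for your target directory as in the CLI examples:

```python
# Sketch: download quant files via the huggingface_hub Python API.
from huggingface_hub import hf_hub_download, snapshot_download

# Fetch a single quant file:
hf_hub_download(
    repo_id="tensorblock/mistral-7b-instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch everything matching a pattern, e.g. all Q4_K variants:
snapshot_download(
    repo_id="tensorblock/mistral-7b-instruct-v0.2-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```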
mistral-7b-instruct-v0.2-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d91150fe9e590a341bf675b5c601c606b698318c8836ff3e59dfc93c26b588a
+ size 2719243808
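Each of these ADDED entries is a Git LFS pointer file rather than the model blob itself: it records the spec version, the sha256 oid of the payload, and its size in bytes. To verify a downloaded file against the oid, a minimal Python sketch (the local path is illustrative):

```python
# Sketch: recompute the checksum in streaming fashion so multi-GB files
# never load fully into memory, then compare to the oid recorded above.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0d91150fe9e590a341bf675b5c601c606b698318c8836ff3e59dfc93c26b588a"
assert sha256_of("MY_LOCAL_DIR/mistral-7b-instruct-v0.2-Q2_K.gguf") == expected
```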
mistral-7b-instruct-v0.2-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:73c37852ee472679da2ad1b9d2f03e0151e893b6cd8a03a59538fa2289c74344
+ size 3822026272
mistral-7b-instruct-v0.2-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a9898c645ef8494025226e3b7df829ae36aa285a77609aaa51d9cc7f1d0bc0b
+ size 3518987808
mistral-7b-instruct-v0.2-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e2907129c94b1710843558b679aa1c3d41b6141d0d61229da5d859b68b78a9f
+ size 3164569120
mistral-7b-instruct-v0.2-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72cfdb68ce047e529c0185c34296eaf0bc9fa01dd5d4cb940f72c07312e72ba1
+ size 4108918304
mistral-7b-instruct-v0.2-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c73c39b5331624adeba3f0858be26cdccd9537db57ff47790ce1339f765cf32
+ size 4368440864
mistral-7b-instruct-v0.2-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d477ba6773b91fc50efb4ec136e46756456b3a6786074d087dcd243f5c41026
+ size 4140375584
mistral-7b-instruct-v0.2-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7eded237e484ebc01d7461649a8f54a606a1285199f43756dc5246157a1d876
+ size 4997717536
mistral-7b-instruct-v0.2-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c513827bd4df877ead0a43e5de95203607a6eb6e0581fb75948e9d60d03c57a
+ size 5131410976
mistral-7b-instruct-v0.2-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4af7eb7e1139900406a904fd40e7fe01b4cda24fdc226bb6bda7e785a0af847e
+ size 4997717536
mistral-7b-instruct-v0.2-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0832362790468171a612f41411c9edd79ccfb23f09c15a91c4f696fc915f8ce6
+ size 5942066720
mistral-7b-instruct-v0.2-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9efe9f9fa381eb94b075587601e96769ae89d68dbe5db3973ade2361082f2140
+ size 7695859232