morriszms committed (verified)
Commit 763b5c7 · Parent(s): 3544d25

Upload folder using huggingface_hub
Mistral-7B-Instruct-v0.2-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9c866c050f12574b6299d95d81af9309f319012d168e3c4082037fb50be41d65
-size 2719242880
+oid sha256:a544819dbd56aaf233ac9b499359ca5956f3d2815a6b95215302cc21ed92278c
+size 2719243520
Mistral-7B-Instruct-v0.2-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:917afbf63d47e89673b232893296240b561551ee9e6c0d135efce075c19ad84d
-size 3822025344
+oid sha256:92536c65fb55cd86094ebaf6997705fe24925f5f79daa4978c1ce4936b0b1fbe
+size 3822025984
Mistral-7B-Instruct-v0.2-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0bd10198bb880923f6cbb55c1cbb6aae3a17a612b5fbbfc6e9d8dc97f8b31660
-size 3518986880
+oid sha256:8e94673e8f59a0adabd7320e8393862d114a1e7baba0bfddcdd37b0f48dfe6c8
+size 3518987520
Mistral-7B-Instruct-v0.2-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7c620501d95d3cc24cdcf486166e6c375881269b96b80a796d675dc0787851e6
-size 3164568192
+oid sha256:12321fd368b9b546a1fdc025823581320de47e03e1db8c34148e074da42273d3
+size 3164568832
Mistral-7B-Instruct-v0.2-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e17b23b3e5df5ffc0ecb4e8f89c6242e7180d31f6cf619dc85895c74c455ed4f
-size 4108917376
+oid sha256:f044c432bd52ffe3fd3217a859a7bf05498d92de28b89548971ac99bb56b52cc
+size 4108918016
Mistral-7B-Instruct-v0.2-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d3434947fb86f13bc1f3eb56cb44150c5c503e7aa57b7d8744e930e363e2d60b
-size 4368439936
+oid sha256:82381d1510fd8465c99e5d6791cb4d80b60b8e8e2853d960ab6522d57a06f0fa
+size 4368440576
Mistral-7B-Instruct-v0.2-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c99766ca8d2edd264dc14e7de3d399a4cb85cdcad4ce99e64a883de1c3cb7330
-size 4140374656
+oid sha256:742d96afb16d6b48c985138c35f60e4a62449ccb6686e38f70da9ca2aaf0af9a
+size 4140375296
Mistral-7B-Instruct-v0.2-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:80bb9973f1031cbcabf24bf18ed1b68436da6ad4aba1e46e5c99f06ef10cf373
-size 4997716608
+oid sha256:34fa705a0ac8c1af5c04e721b032d7f771e93a9c155ce93d764cff55e701284d
+size 4997717248
Mistral-7B-Instruct-v0.2-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d9a4fb0e0ec3cfccd52c671606a96665891456b463aa892a2be2376703c69d74
-size 5131410048
+oid sha256:ad3c4c14dad5c26de196ece91dd35e6f51731e6eb6be3900e20dd07f0c46d7c7
+size 5131410688
Mistral-7B-Instruct-v0.2-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e629e760c155855fe7a8344477e975ad2d05db08ff47c09d884e8f3d87f6cf7b
-size 4997716608
+oid sha256:a97a8bfacfde09619b9b43bc96d6772400d934ea3663a88a079df8e31ff3dd6e
+size 4997717248
Mistral-7B-Instruct-v0.2-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2780a4fd97a1c0c17f639834f9f76b0550ce18ca9cdb0ed34116ac4605c2933a
-size 5942065792
+oid sha256:0fbe5e5da957e1a515f1e81882c2108a1b7288d18844167d893d235f2296dbd0
+size 5942066432
Mistral-7B-Instruct-v0.2-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:94c3b7d0066c1ee5876cc862395838ec4aee8baa3dfb4b731a705567f3ac9446
-size 7695858304
+oid sha256:34784c0e302df733f6805e5a7e404cfec83d0a328fa78de56d14609ca9f3cf87
+size 7695858944
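Each `.gguf` entry above is a Git LFS pointer file, not the weights themselves: the pointer records only the object's `oid` (a SHA-256 digest) and `size`, which is why every diff is a two-field change. A minimal stdlib-only sketch of parsing such a pointer and verifying a locally downloaded file against it (the helper names are illustrative, not part of git-lfs or this repo):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file ("key value" lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_file(path: str, pointer: dict, chunk_size: int = 1 << 20) -> bool:
    """Check a local file's byte size and SHA-256 against an LFS pointer."""
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
            size += len(chunk)
    expected_oid = pointer["oid"].removeprefix("sha256:")
    return size == int(pointer["size"]) and digest.hexdigest() == expected_oid
```

Streaming in 1 MiB chunks matters here since the largest file above is close to 8 GB.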
README.md CHANGED
@@ -1,16 +1,19 @@
 ---
 license: apache-2.0
-pipeline_tag: text-generation
 tags:
 - finetuned
 - TensorBlock
 - GGUF
+pipeline_tag: text-generation
+new_version: mistralai/Mistral-7B-Instruct-v0.3
 inference: true
 widget:
 - messages:
   - role: user
     content: What is your favorite condiment?
-base_model: MaziyarPanahi/Mistral-7B-Instruct-v0.2
+extra_gated_description: If you want to learn more about how we process your personal
+  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
+base_model: mistralai/Mistral-7B-Instruct-v0.2
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -24,9 +27,9 @@ base_model: MaziyarPanahi/Mistral-7B-Instruct-v0.2
 </div>
 </div>
 
-## MaziyarPanahi/Mistral-7B-Instruct-v0.2 - GGUF
+## mistralai/Mistral-7B-Instruct-v0.2 - GGUF
 
-This repo contains GGUF format model files for [MaziyarPanahi/Mistral-7B-Instruct-v0.2](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.2).
+This repo contains GGUF format model files for [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
@@ -39,25 +42,27 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 ## Prompt template
 
 ```
-<s>[INST] {prompt} [/INST]
+<s> [INST] {system_prompt}
+
+{prompt} [/INST]
 ```
 
 ## Model file specification
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Mistral-7B-Instruct-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q2_K.gguf) | Q2_K | 2.532 GB | smallest, significant quality loss - not recommended for most purposes |
-| [Mistral-7B-Instruct-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
-| [Mistral-7B-Instruct-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
-| [Mistral-7B-Instruct-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
-| [Mistral-7B-Instruct-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [Mistral-7B-Instruct-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
-| [Mistral-7B-Instruct-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
-| [Mistral-7B-Instruct-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_0.gguf) | Q5_0 | 4.654 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [Mistral-7B-Instruct-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_K_S.gguf) | Q5_K_S | 4.654 GB | large, low quality loss - recommended |
-| [Mistral-7B-Instruct-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
-| [Mistral-7B-Instruct-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
-| [Mistral-7B-Instruct-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
+| [Mistral-7B-Instruct-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Mistral-7B-Instruct-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
+| [Mistral-7B-Instruct-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
+| [Mistral-7B-Instruct-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
+| [Mistral-7B-Instruct-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Mistral-7B-Instruct-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
+| [Mistral-7B-Instruct-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
+| [Mistral-7B-Instruct-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Mistral-7B-Instruct-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
+| [Mistral-7B-Instruct-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
+| [Mistral-7B-Instruct-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
+| [Mistral-7B-Instruct-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/Mistral-7B-Instruct-v0.2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
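The updated prompt template above is easy to apply programmatically. A minimal sketch (the function name is illustrative, not part of the repo; the no-system-prompt fallback follows the single-turn template this commit replaces):

```python
def build_prompt(prompt: str, system_prompt: str = "") -> str:
    """Format a request using the Mistral-7B-Instruct-v0.2 template."""
    if system_prompt:
        # Updated template: system prompt and user prompt share one [INST] block,
        # separated by a blank line.
        return f"<s> [INST] {system_prompt}\n\n{prompt} [/INST]"
    # Previous single-turn template (no system prompt).
    return f"<s>[INST] {prompt} [/INST]"
```

Note that llama.cpp tokenizes `<s>` as the BOS token, so this string can be passed directly as the raw prompt.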