x0001 committed on
Commit 6ca8657 · 0 Parent(s)

Duplicate from localmodels/LLM
.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
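The `filter=lfs` attributes above route every matching file through Git LFS, so the repository stores only small pointer stubs in place of the multi-gigabyte model binaries. As a minimal sketch (not part of this repo), such a pointer stub can be parsed in Python like this; the field layout follows the Git LFS pointer format (`version`, `oid`, `size` lines):

```python
import re

def parse_lfs_pointer(text):
    """Parse a Git LFS pointer stub into its fields, or return None.

    A pointer is the small text file ("version", "oid", "size" lines)
    that LFS checks into the repo in place of the real binary.
    """
    fields = dict(
        line.split(" ", 1) for line in text.strip().splitlines() if " " in line
    )
    if fields.get("version") != "https://git-lfs.github.com/spec/v1":
        return None
    m = re.fullmatch(r"sha256:([0-9a-f]{64})", fields.get("oid", ""))
    if m is None:
        return None
    return {"sha256": m.group(1), "size": int(fields.get("size", "0"))}
```

Run against any of the pointer files further down in this commit, it yields the expected SHA-256 and byte size of the corresponding `.bin`.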
README.md ADDED
@@ -0,0 +1,36 @@
+ ---
+ duplicated_from: localmodels/LLM
+ ---
+ # WizardLM 13B v1.1 ggml
+
+ From: https://huggingface.co/WizardLM/WizardLM-13B-V1.1
+
+ ---
+
+ ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+ Quantized using an older version of llama.cpp and compatible with llama.cpp from May 19, commit 2d5db48.
+
+ ### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K`
+
+ Quantization methods compatible with the latest llama.cpp from June 6, commit 2d43387.
+
+ ---
+
+ ## Files
+ | Name | Quant method | Bits | Size | Max RAM required, no GPU offloading | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | wizardlm-13b-v1.1.ggmlv3.q2_K.bin | q2_K | 2 | 5.67 GB | 8.17 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+ | wizardlm-13b-v1.1.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 7.07 GB | 9.57 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+ | wizardlm-13b-v1.1.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.46 GB | 8.96 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+ | wizardlm-13b-v1.1.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.80 GB | 8.30 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
+ | wizardlm-13b-v1.1.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
+ | wizardlm-13b-v1.1.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+ | wizardlm-13b-v1.1.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.99 GB | 10.49 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
+ | wizardlm-13b-v1.1.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.49 GB | 9.99 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
+ | wizardlm-13b-v1.1.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
+ | wizardlm-13b-v1.1.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+ | wizardlm-13b-v1.1.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.33 GB | 11.83 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
+ | wizardlm-13b-v1.1.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 9.07 GB | 11.57 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
+ | wizardlm-13b-v1.1.ggmlv3.q6_K.bin | q6_K | 6 | 10.76 GB | 13.26 GB | New k-quant method. Uses GGML_TYPE_Q8_K (6-bit quantization) for all tensors. |
+ | wizardlm-13b-v1.1.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
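Across every row of the table above, the "Max RAM required" figure is the file size plus a flat 2.50 GB allowance for context and runtime buffers. This is an observed pattern in this table, not a llama.cpp guarantee, but it gives a quick way to estimate memory needs for a quant that is not listed:

```python
def estimated_max_ram_gb(file_size_gb, overhead_gb=2.5):
    """Estimate max RAM (no GPU offloading) from model file size.

    The 2.5 GB overhead is inferred from the table above, not a
    documented llama.cpp constant.
    """
    return round(file_size_gb + overhead_gb, 2)
```

For example, the 5.67 GB q2_K file comes out at 8.17 GB, matching the table.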
wizardlm-13b-v1.1.ggmlv3.q2_K.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b1374163fd8874bb5be006ed3713c4e195051fd44650b569e537e495b1c37f5
+ size 5668531968
wizardlm-13b-v1.1.ggmlv3.q3_K_L.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:305e5862dda973b53a61f125f43a3651f69c43b2ee252df9c167fc9db06da2ea
+ size 7072640768
wizardlm-13b-v1.1.ggmlv3.q3_K_M.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5577d4c5a8144dcb399713036104ebd920f454a3733ecc28c29f0d50bb04849c
+ size 6456602368
wizardlm-13b-v1.1.ggmlv3.q3_K_S.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b8ff8b6867338f464b586e83c333fe06ff5ee5f2e6a188a2bd17be6fe627bdb
+ size 5802061568
wizardlm-13b-v1.1.ggmlv3.q4_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21ecf45e7064a25eea280ffff7e8bb9d698ad29900ceb941f2afed31a460d91c
+ size 7323310848
wizardlm-13b-v1.1.ggmlv3.q4_1.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea2828209c912b40a7d5392f5faad70bb4288ccdfd7efd07d4b1c6df414ae344
+ size 8136777088
wizardlm-13b-v1.1.ggmlv3.q4_K_M.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e21c0618925ca48fbb4a2e9f92d628867c54d5c6efef90a691e626820a024af4
+ size 7987277568
wizardlm-13b-v1.1.ggmlv3.q4_K_S.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6fb6bf402f3472a75c71e55c5c91284259ab5a6d90ac5c32ee7b2c94ba7a27dc
+ size 7487155968
wizardlm-13b-v1.1.ggmlv3.q5_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74ed52e5675dcd0b27c56e0bd709e51f540eb1f3451101e661ee3d84ba535257
+ size 8950243328
wizardlm-13b-v1.1.ggmlv3.q5_1.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c8d9d1f4cd16998b41f8bcb5caef2c8d24aed676e2438b04e89fdb30822de69
+ size 9763709568
wizardlm-13b-v1.1.ggmlv3.q5_K_M.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7edd570f1cc87a29a520f8dd979b4a0b3b5bb27a84265a320c52dc0acee62314
+ size 9330765568
wizardlm-13b-v1.1.ggmlv3.q5_K_S.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93ac409eeb2f77d8673d34138d492925231d1942f1402e7359c392551e7bdeb6
+ size 9073127168
wizardlm-13b-v1.1.ggmlv3.q6_K.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f624783a2034ee42aabd97e0c6a36b49e32af3e99e19d6d558e5a6e1048a77af
+ size 10758221568
wizardlm-13b-v1.1.ggmlv3.q8_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8c1d3838b4b4cd67d5ecaf16de08ec5e7e5aef25a4537ea9d86494277871f14
+ size 13831040768