morriszms committed
Commit 46eb43a · verified · 1 parent: 3bea5d2

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Kan-LLaMA-7B-base-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
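These new rules route each uploaded .gguf file through Git LFS. huggingface_hub adds such entries automatically as part of the upload; a manual equivalent (a sketch, assuming git-lfs is installed in the working clone) would be:

```shell
git lfs install
git lfs track "Kan-LLaMA-7B-base-Q2_K.gguf"   # repeat per file, or use a pattern such as "*.gguf"
git add .gitattributes
```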
Kan-LLaMA-7B-base-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2bcd5b44d4c90dc5586f511e69ee506205b8f01aafcad13652b555a1e592138
+ size 2615418752
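Each of the ADDED .gguf entries in this commit is a Git LFS pointer rather than the model binary itself: it records the pointer-spec version, the SHA-256 of the actual file, and its size in bytes. After downloading, a file can be checked against its pointer (a sketch, assuming a Linux shell with GNU coreutils):

```shell
sha256sum Kan-LLaMA-7B-base-Q2_K.gguf    # should print a2bcd5b4...e592138
stat -c %s Kan-LLaMA-7B-base-Q2_K.gguf   # should print 2615418752
```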
Kan-LLaMA-7B-base-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8e814d7549c9964c710de7ccb7133a84adcab132bcec793656e8494fae2b6092
+ size 3686912768
Kan-LLaMA-7B-base-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04d6dc9d39a1035df489375f83821ef9c98ce52b230faba3a041a5ebf86501a9
+ size 3387806464
Kan-LLaMA-7B-base-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27ee1291f7bf9d3e396064a423edf1445fdf8d5c574bbafec2a5d0d9a648df72
+ size 3038106368
Kan-LLaMA-7B-base-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7fb9adaa06f465c09e45366e8d168279b44cc7b22c673d2ddc3c74f66109e27
+ size 3925085312
Kan-LLaMA-7B-base-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e71eaf332a9cb4c4ddeace77814f760c40a2e1de9922ec87342c4aef6a627e06
+ size 4180282496
Kan-LLaMA-7B-base-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:029d74b0332c9d97c753053e4299da40796e8a2a382e9d5188cb80b879d2b2a2
+ size 3956018304
Kan-LLaMA-7B-base-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42ee10d751e27f7a4d1325a395294ab5c9e6f3837384c7fbcf4edbb5110f397d
+ size 4759889024
Kan-LLaMA-7B-base-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303e24eaa5496fa2bd6cc0d5731535a452bcfc7667d20a086da49dd0067afd44
+ size 4891354240
Kan-LLaMA-7B-base-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adb239ffaba30237d3c150b9e178541be3b5a62e68e4a6dc2c8934fff909c4e8
+ size 4759889024
Kan-LLaMA-7B-base-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fb14a6007d3fb35a5b55fc6d88740c1c4c7bcf3d9103066debd5ad529b4c5df
+ size 5646867968
Kan-LLaMA-7B-base-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d97bacf220c964797af1a75ce441658c9f6b33fda252a3ad1b44e1afbaf1bab8
+ size 7313324800
README.md ADDED
@@ -0,0 +1,79 @@
+ ---
+ license: mit
+ language:
+ - kn
+ - en
+ base_model: fierysurf/Kan-LLaMA-7B-base
+ tags:
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## fierysurf/Kan-LLaMA-7B-base - GGUF
+
+ This repo contains GGUF format model files for [fierysurf/Kan-LLaMA-7B-base](https://huggingface.co/fierysurf/Kan-LLaMA-7B-base).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
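+ The compatibility note above pins a specific llama.cpp commit; a minimal way to build at exactly that revision (a sketch, assuming git, CMake, and a C++ toolchain are available) is:
+
+ ```shell
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ git checkout a6744e43e80f4be6398fc7733a01642c846dce1d   # commit b4242 referenced above
+ cmake -B build && cmake --build build --config Release
+ ```
+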
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Kan-LLaMA-7B-base-Q2_K.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q2_K.gguf) | Q2_K | 2.615 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Kan-LLaMA-7B-base-Q3_K_S.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q3_K_S.gguf) | Q3_K_S | 3.038 GB | very small, high quality loss |
+ | [Kan-LLaMA-7B-base-Q3_K_M.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q3_K_M.gguf) | Q3_K_M | 3.388 GB | very small, high quality loss |
+ | [Kan-LLaMA-7B-base-Q3_K_L.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q3_K_L.gguf) | Q3_K_L | 3.687 GB | small, substantial quality loss |
+ | [Kan-LLaMA-7B-base-Q4_0.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q4_0.gguf) | Q4_0 | 3.925 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Kan-LLaMA-7B-base-Q4_K_S.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q4_K_S.gguf) | Q4_K_S | 3.956 GB | small, greater quality loss |
+ | [Kan-LLaMA-7B-base-Q4_K_M.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q4_K_M.gguf) | Q4_K_M | 4.180 GB | medium, balanced quality - recommended |
+ | [Kan-LLaMA-7B-base-Q5_0.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q5_0.gguf) | Q5_0 | 4.760 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Kan-LLaMA-7B-base-Q5_K_S.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q5_K_S.gguf) | Q5_K_S | 4.760 GB | large, low quality loss - recommended |
+ | [Kan-LLaMA-7B-base-Q5_K_M.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q5_K_M.gguf) | Q5_K_M | 4.891 GB | large, very low quality loss - recommended |
+ | [Kan-LLaMA-7B-base-Q6_K.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q6_K.gguf) | Q6_K | 5.647 GB | very large, extremely low quality loss |
+ | [Kan-LLaMA-7B-base-Q8_0.gguf](https://huggingface.co/tensorblock/Kan-LLaMA-7B-base-GGUF/blob/main/Kan-LLaMA-7B-base-Q8_0.gguf) | Q8_0 | 7.313 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub command-line client:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/Kan-LLaMA-7B-base-GGUF --include "Kan-LLaMA-7B-base-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/Kan-LLaMA-7B-base-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
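+
+ Once a file has been downloaded, it can be loaded directly with llama.cpp. A minimal sketch (assuming llama.cpp was built as outlined earlier, so the CLI binary is `build/bin/llama-cli`; the chosen quant file and prompt are placeholders):
+
+ ```shell
+ ./llama.cpp/build/bin/llama-cli -m MY_LOCAL_DIR/Kan-LLaMA-7B-base-Q4_K_M.gguf -p "Hello" -n 64
+ ```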