morriszms committed
Commit 15497ee · verified · 1 Parent(s): 97e8561

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Fanar-1-9B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Fanar-1-9B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a85941cda1a67f65423af25e3cc3288d228bdca80fc1f652a8dbffa887b06522
+ size 3426819552
Fanar-1-9B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a0f3252bcb8e7e3dfdc8bed58081656736e21447af90dac26055810b2de45281
+ size 4753874400
Fanar-1-9B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6cd388e255ea3fefb32a93bc30a76faec8d51b31e9e2278ab40b40e711f1479
+ size 4383202784
Fanar-1-9B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd1e917d4ae1cd9de0e09ed2ca825cf5f9668d4b5db56d4f78d3207d5757693e
+ size 3959086560
Fanar-1-9B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0269064bbf10c91af015fef4314d7146e23efcde3851cd9d70ddedf003b1cf54
+ size 5064564192
Fanar-1-9B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5249f639accbf2ec6cb90aed84cbe00486fb9433e9565f09bfe74c629ab60365
+ size 5382479328
Fanar-1-9B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79a331462b9db1d84ce7c3e0f55dfb8d71562c0b5b6c7a1d94c14061c27633dd
+ size 5100346848
Fanar-1-9B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ac7376606a05572cf697636457733616b4033523e4c9ee3ec1cc2badb13c7b1
+ size 6105013728
Fanar-1-9B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d769578132d8cd69fc57a81a80ef56cf09564827114f488ddce7928b80519613
+ size 6268788192
Fanar-1-9B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f9a0e22261d79a3b1a7228a3cbf0e42e3f4113225b0628b68194cd7cea23850
+ size 6105013728
Fanar-1-9B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4be569dce86c5749974f15d85f25c7c235b7f8d985634153419f62cc9ac79ba8
+ size 7210491360
Fanar-1-9B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4a1f88e001e2c5c6272c7902b81aca1a685c50060aa831fbb5572251ae3bd28
+ size 9337688544
README.md ADDED
@@ -0,0 +1,150 @@
+ ---
+ license: apache-2.0
+ language:
+ - ar
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - pytorch
+ - TensorBlock
+ - GGUF
+ library_name: transformers
+ base_model: QCRI/Fanar-1-9B
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+
+ [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
+ [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
+ [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
+ [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)
+
+
+ ## QCRI/Fanar-1-9B - GGUF
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Join our Discord to learn more about what we're building ↗
+ </a>
+ </div>
+
+ This repo contains GGUF format model files for [QCRI/Fanar-1-9B](https://huggingface.co/QCRI/Fanar-1-9B).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
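+
+ Once a file is downloaded (see the instructions below), it can be run directly with llama.cpp. The following is a minimal sketch, assuming a local llama.cpp build at or after the commit above; the `llama-cli` binary name and flags follow recent llama.cpp releases, and `Fanar-1-9B-Q4_K_M.gguf` is used as an example filename:
+
+ ```shell
+ # One-shot generation: -m selects the GGUF file,
+ # -p supplies the prompt, -n caps the number of generated tokens.
+ ./llama-cli -m Fanar-1-9B-Q4_K_M.gguf -p "Hello" -n 128
+ ```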
+
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th colspan="2" style="font-size: 25px;">Forge</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
+ </th>
+ </tr>
+ <tr>
+ <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <a href="https://github.com/TensorBlock/forge" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">🚀 Try it now! 🚀</a>
+ </th>
+ </tr>
+
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
+ ```
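+
+ Since the prompt format could not be detected automatically, one option is to rely on the chat template embedded in the GGUF metadata, if the base model shipped one. As a sketch, assuming a recent llama.cpp build, conversation mode applies the embedded template when present:
+
+ ```shell
+ # -cnv starts an interactive chat that uses the model's
+ # built-in chat template when one is embedded in the GGUF.
+ ./llama-cli -m Fanar-1-9B-Q4_K_M.gguf -cnv
+ ```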
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Fanar-1-9B-Q2_K.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q2_K.gguf) | Q2_K | 3.427 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Fanar-1-9B-Q3_K_S.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q3_K_S.gguf) | Q3_K_S | 3.959 GB | very small, high quality loss |
+ | [Fanar-1-9B-Q3_K_M.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q3_K_M.gguf) | Q3_K_M | 4.383 GB | very small, high quality loss |
+ | [Fanar-1-9B-Q3_K_L.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q3_K_L.gguf) | Q3_K_L | 4.754 GB | small, substantial quality loss |
+ | [Fanar-1-9B-Q4_0.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q4_0.gguf) | Q4_0 | 5.065 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Fanar-1-9B-Q4_K_S.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q4_K_S.gguf) | Q4_K_S | 5.100 GB | small, greater quality loss |
+ | [Fanar-1-9B-Q4_K_M.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q4_K_M.gguf) | Q4_K_M | 5.382 GB | medium, balanced quality - recommended |
+ | [Fanar-1-9B-Q5_0.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q5_0.gguf) | Q5_0 | 6.105 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Fanar-1-9B-Q5_K_S.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q5_K_S.gguf) | Q5_K_S | 6.105 GB | large, low quality loss - recommended |
+ | [Fanar-1-9B-Q5_K_M.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q5_K_M.gguf) | Q5_K_M | 6.269 GB | large, very low quality loss - recommended |
+ | [Fanar-1-9B-Q6_K.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q6_K.gguf) | Q6_K | 7.210 GB | very large, extremely low quality loss |
+ | [Fanar-1-9B-Q8_0.gguf](https://huggingface.co/tensorblock/QCRI_Fanar-1-9B-GGUF/blob/main/Fanar-1-9B-Q8_0.gguf) | Q8_0 | 9.338 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/QCRI_Fanar-1-9B-GGUF --include "Fanar-1-9B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
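+
+ Optionally, verify the download against the SHA-256 recorded in this commit's LFS pointers; the digest below is the one for Fanar-1-9B-Q2_K.gguf from this upload:
+
+ ```shell
+ # Compare the local file's digest with the LFS oid; prints OK on a match.
+ echo "a85941cda1a67f65423af25e3cc3288d228bdca80fc1f652a8dbffa887b06522  MY_LOCAL_DIR/Fanar-1-9B-Q2_K.gguf" | sha256sum -c -
+ ```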
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/QCRI_Fanar-1-9B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
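+
+ After downloading, a common next step is to serve the model over an HTTP endpoint with llama.cpp's bundled server. This is a sketch only; the port and context size below are arbitrary choices, not repository defaults:
+
+ ```shell
+ # Serve the GGUF on localhost:8080; -c sets the context window size.
+ ./llama-server -m MY_LOCAL_DIR/Fanar-1-9B-Q4_K_M.gguf -c 4096 --port 8080
+ ```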