yasserrmd committed on
Commit b3aecbc · verified · 1 Parent(s): f0a5820

Update README.md

Files changed (1):
  1. README.md +26 -0

README.md CHANGED
@@ -6,6 +6,7 @@ base_model:
   - QCRI/Fanar-1-9B-Instruct
 pipeline_tag: text-generation
 ---
+
 # Fanar-1-9B-Instruct — GGUF quantized

 This repo contains multiple **GGUF** builds of the Arabic-English LLM **Fanar-1-9B-Instruct**, the instruction-tuned variant of Fanar-1-9B created by QCRI / HBKU. The base model is a 9B-parameter continuation of **gemma-2-9b**, trained on ≈1 T Arabic and English tokens and aligned through SFT → DPO (4.5 M / 250 K preference pairs). The license remains **Apache-2.0** and the context window is **4,096 tokens**.
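Since the repo ships multiple quant levels of a 9B-parameter model, a back-of-the-envelope size estimate can help pick one. A minimal sketch — `approx_gguf_size_gb` is an illustrative helper, not part of any tooling, and the ~4.8 bits/weight figure for Q4_K_M is an approximation (actual file sizes include metadata and vary by quant scheme):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Back-of-the-envelope GGUF file size, ignoring format overhead."""
    return n_params * bits_per_weight / 8 / 1e9

# 9B parameters at roughly 4.8 bits/weight (approx. Q4_K_M)
print(round(approx_gguf_size_gb(9e9, 4.8), 1))  # → 5.4 (GB)
```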
@@ -33,3 +34,28 @@ This repo contains multiple **GGUF** builds of the Arabic-English LLM **Fanar-1-
 ```bash
 git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j
 ./main -m Fanar-1-9B-Instruct.Q4_K_M.gguf -p "ما هي عاصمة قطر؟"
+```
+
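The `./main` call above passes a bare prompt. Since Fanar follows gemma-2's chat template, wrapping the user turn in Gemma-style markers usually improves instruction-following in raw completion mode. A minimal sketch — `gemma_prompt` is an illustrative helper, and the exact markers should be confirmed against the upstream model card's tokenizer config:

```python
def gemma_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma-style chat markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Build the string to pass to ./main via -p
print(gemma_prompt("ما هي عاصمة قطر؟"))
```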
+## Python (llama-cpp-python)
+
+```python
+from llama_cpp import Llama
+
+llm = Llama(
+    model_path="Fanar-1-9B-Instruct.Q4_K_M.gguf",
+    n_ctx=4096,
+    chat_format="gemma",  # Fanar follows the Gemma chat template
+)
+response = llm.create_chat_completion(
+    messages=[{"role": "user", "content": "Translate 'peace' to Arabic"}]
+)
+# create_chat_completion returns a dict, not an object with attributes
+print(response["choices"][0]["message"]["content"])
+```
+
+---
+
+### Credits & notes
+
+* **Original model:** [`QCRI/Fanar-1-9B-Instruct`](https://huggingface.co/QCRI/Fanar-1-9B-Instruct) (please consult its model card for training data, evaluation results, and limitations). ([Hugging Face][1])
+* This repository only supplies GGUF conversions for efficient local inference on CPU/GPU; the original weights are unchanged.
+* Use responsibly: outputs may be inaccurate, biased, or culturally sensitive.
+
+[1]: https://huggingface.co/QCRI/Fanar-1-9B-Instruct "QCRI/Fanar-1-9B-Instruct - Hugging Face"