jana-ashraf-ai committed (verified)
Commit d8d9cba · 1 parent: 288967a

Update README.md

Files changed (1): README.md +8 -8
README.md CHANGED
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-
+---
 library_name: transformers
 license: apache-2.0
 base_model: Qwen/Qwen2.5-1.5B-Instruct
@@ -17,13 +17,13 @@ tags:
 - code
 - instruction-tuning
 - fine-tuned
-
+---
 
 # 🐍 Python Assistant (Arabic)
 
 A fine-tuned version of **Qwen2.5-1.5B-Instruct** that answers Python programming questions in **Arabic**, with structured JSON output. Fine-tuned using LoRA via LLaMA-Factory.
 
-
+---
 
 ## Model Details
 
@@ -34,13 +34,13 @@ A fine-tuned version of **Qwen2.5-1.5B-Instruct** that answers Python programmin
 - **License:** Apache 2.0
 - **Fine-tuning method:** QLoRA (LoRA rank=32) via LLaMA-Factory
 
-
+---
 
 ## What does this model do?
 
 Given a Python programming question in English, the model returns a structured JSON answer **in Arabic**, explaining the solution step by step.
 
-
+---
 
 ## How to Use
 ```python
@@ -74,7 +74,7 @@ outputs = model.generate(**inputs, max_new_tokens=512)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
-
+---
 
 ## Training Details
 
@@ -95,7 +95,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 | Framework | LLaMA-Factory |
 | Hardware | Google Colab T4 GPU |
 
-
+---
 
 ## Training Data
 
@@ -105,7 +105,7 @@ The answers were annotated and structured using GPT to produce Arabic explanatio
 
 **Train / Val split:** 90% / 10%
 
-
+---
 
 ## Limitations
 