The dataset was produced through a structured pipeline:

* Controlled synthetic generation to expand coverage while keeping the same voice
* A dialect rule-set (positive/negative constraints) to:
  * encourage Bahraini lexical markers (e.g., وايد "a lot", جذي "like this", هني "here", شلون "how", عقبها/بعدها "after that")
  * discourage MSA scaffolding and overly formal connectors
  * keep responses short and practical
* Template correctness via the ALLaM chat template, with EOS enforcement
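A minimal sketch of how such a positive/negative rule-set could act as a filter during generation. The marker and connector lists below are illustrative assumptions, not the project's actual rule-set:

```python
# Illustrative constraint lists (assumed for this sketch): Bahraini dialect
# markers count as positive hits, formal MSA connectors as negative hits.
BAHRAINI_MARKERS = ["وايد", "جذي", "هني", "شلون", "عقبها", "بعدها"]
MSA_CONNECTORS = ["علاوة على ذلك", "بالإضافة إلى", "وبالتالي", "لذلك فإن"]

def dialect_score(text: str) -> dict:
    """Count positive (dialect) and negative (MSA) hits in a candidate response."""
    pos = sum(1 for m in BAHRAINI_MARKERS if m in text)
    neg = sum(1 for m in MSA_CONNECTORS if m in text)
    # Keep a sample only if it shows at least one dialect marker and no MSA scaffolding.
    return {"positive": pos, "negative": neg, "keep": pos >= 1 and neg == 0}

print(dialect_score("شلون الحال؟ وايد زين هني"))
# → {'positive': 3, 'negative': 0, 'keep': True}
```

In practice such a check would run per generated sample, with rejected samples regenerated or discarded.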
Data was formatted using ALLaM’s chat template:
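As a minimal sketch of the EOS-enforcement step (the real pipeline would obtain the formatted string from `tokenizer.apply_chat_template`; the `</s>` EOS string here is an assumption, ALLaM's actual EOS token comes from its tokenizer):

```python
def ensure_eos(text: str, eos_token: str = "</s>") -> str:
    # EOS enforcement: every formatted training example must end with the
    # EOS token so the fine-tuned model learns to stop generating.
    return text if text.endswith(eos_token) else text + eos_token

# In the real pipeline this string would come from
# tokenizer.apply_chat_template(messages, tokenize=False).
example = "<formatted user/assistant turns>"
formatted = ensure_eos(example)
print(formatted.endswith("</s>"))  # → True
```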
Base configuration used during the run:

```yaml
max_seq_length: 2048
optimizer: adamw_torch
learning_rate: 2e-5
lr_scheduler: cosine
warmup_ratio: 0.1
weight_decay: 0.01
max_grad_norm: 1.0
per_device_train_batch_size: 4
gradient_accumulation_steps: 16
num_train_epochs: 4
packing: false
seed: 42
precision: fp16 (T4) / bf16 (Ampere+)
attention_implementation: eager
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
lora:
  r: 16
  alpha: 32
  dropout: 0.05
  target_modules:
    - q_proj
    - k_proj
    - v_proj
    - o_proj
    - gate_proj
    - up_proj
    - down_proj
```
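One derived quantity worth noting: the effective batch size is the per-device batch size times the gradient-accumulation steps (times the number of GPUs; a single GPU is assumed here, matching the T4 mentioned in the precision setting):

```python
# Values from the configuration above.
per_device_train_batch_size = 4
gradient_accumulation_steps = 16
num_gpus = 1  # assumption: a single T4/Ampere GPU

effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)
print(effective_batch_size)  # → 64
```

Gradient accumulation lets the run reach this batch size while only ever holding 4 samples of activations in GPU memory at once.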

### Notes on Tokenizer / Special Tokens