Update README.md

---
library_name: transformers
datasets:
- HeshamHaroon/saudi-dialect-conversations
base_model:
- LiquidAI/LFM2.5-1.2B-Instruct
---

# Saudi Dialect LFM2.5 — Instruction-Tuned Arabic Dialect Model

## Model Description

This model is a fine-tuned version of **Liquid AI**'s **LFM2.5‑1.2B‑Instruct**, adapted for Saudi dialect conversational generation.

The base model belongs to the LFM2.5 family — hybrid state-space + attention language models designed for **fast on-device inference**, low memory usage, and strong performance relative to size. It has ~1.17B parameters, a 32k-token context length, and supports multilingual generation, including Arabic. ([Hugging Face][1])

This fine-tuned variant specializes the model for **Saudi dialect conversational patterns**, improving fluency, dialect authenticity, and instruction following for regional Arabic use cases.

---

## Intended Use

### Primary Use Cases

* Saudi dialect chatbots
* Customer support assistants
* Conversational agents
* Arabic NLP research
* Dialect-aware RAG pipelines
* Dialogue generation systems

### Out-of-Scope Uses

* Legal/medical advice
* Safety-critical decision making
* High-precision knowledge tasks without retrieval
* Sensitive content generation

---

## Training Details

### Base Model

* Architecture: Hybrid state-space + attention
* Parameters: ~1.17B
* Context length: 32,768 tokens
* Training tokens: ~28T
* Languages: Multilingual, including Arabic ([Hugging Face][1])

---

### Dataset

Fine-tuned on:

* **Dataset:** `HeshamHaroon/saudi-dialect-conversations` (see the loading snippet after this list)
* **Domain:** Conversational dialogue
* **Language:** Saudi dialect Arabic
* **Format:** Instruction → Response pairs
* **Purpose:** Increase dialect authenticity and conversational naturalness
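
The dataset can be pulled straight from the Hub for a quick look. A minimal snippet; the split and column names are not guaranteed here, so check the dataset card:

```python
# Inspect the fine-tuning data; split and column names should be verified on the dataset card.
from datasets import load_dataset

ds = load_dataset("HeshamHaroon/saudi-dialect-conversations")
print(ds)                      # available splits and row counts

first_split = next(iter(ds))   # take whichever split comes first
print(ds[first_split][0])      # one raw instruction/response record
```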

---

### Training Configuration

(Extracted from the training notebook)

| Parameter             | Value                        |
| --------------------- | ---------------------------- |
| Epochs                | 4                            |
| Learning Rate         | 2e-4                         |
| Batch Size            | 16                           |
| Gradient Accumulation | 4                            |
| Optimizer             | AdamW                        |
| LR Scheduler          | Linear                       |
| Warmup Ratio          | 0.03                         |
| Sequence Length       | 8096                         |
| Precision             | FP16                         |
| Training Type         | Supervised Fine-Tuning (SFT) |

---

### Training Procedure

Training was performed using:

* Transformers
* TRL SFTTrainer
* LoRA fine-tuning
* Mixed precision
* Gradient accumulation

The base model weights were adapted via LoRA rather than retrained from scratch.
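
For illustration, a minimal sketch of an SFT + LoRA run of this shape using TRL. The hyperparameters mirror the table above; the LoRA rank/alpha, the dataset split name, any column mapping, and the exact TRL argument names are assumptions rather than the original training notebook:

```python
# Hedged sketch of a TRL SFTTrainer + LoRA run matching the configuration above.
# LoRA settings, split name, and data formatting are assumptions, not the original notebook.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

base_id = "LiquidAI/LFM2.5-1.2B-Instruct"
train_data = load_dataset("HeshamHaroon/saudi-dialect-conversations", split="train")
# Depending on the dataset's column layout, a mapping step into the trainer's
# expected chat/text format may be required here.

model = AutoModelForCausalLM.from_pretrained(base_id)

peft_config = LoraConfig(      # illustrative LoRA settings
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

args = SFTConfig(              # values from the Training Configuration table
    output_dir="lfm2.5-saudi-dialect-sft",
    num_train_epochs=4,
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    fp16=True,
    max_seq_length=8096,       # argument name differs across TRL versions
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_data,
    peft_config=peft_config,
)
trainer.train()
```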

---

## Evaluation

Qualitative evaluation indicates:

* Improved dialect fluency
* Reduced leakage of Modern Standard Arabic (MSA)
* Better conversational tone
* Higher lexical authenticity

Dialect-specific fine-tuning is known to significantly increase dialect generation accuracy and reduce drift toward Standard Arabic in Arabic LLMs. ([arXiv][2])

---

## Performance Characteristics

**Strengths**

* Very fast inference
* Low memory footprint
* Strong conversational coherence
* Good instruction following

**Limitations**

* Small model size limits factual depth
* May hallucinate
* Less capable at complex reasoning than larger models
* Dialect bias toward Saudi Arabic

---

## Bias, Risks, and Safety

Potential risks:

* Dialect bias
* Cultural bias inherited from the dataset
* Toxic outputs if prompted maliciously
* Hallucinated facts

Mitigations:

* Dataset filtering
* Instruction alignment
* Moderation layers are recommended in deployment

---

## Hardware Requirements

Runs efficiently on (see the quantized-loading sketch after this list):

* CPU inference (<1 GB memory when quantized)
* Mobile NPUs
* Edge devices ([Hugging Face][1])
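
As one concrete quantized-loading route, the snippet below loads the model in 4-bit via bitsandbytes. This is a sketch under the assumption that standard transformers quantization applies to this architecture, and it requires a CUDA GPU; fully on-CPU or mobile deployments would normally go through a dedicated on-device runtime or export format instead.

```python
# Hedged 4-bit loading sketch; assumes a CUDA GPU with bitsandbytes installed and
# that standard transformers quantization supports this architecture.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "AyoubChLin/lfm2.5-saudi-dialect"  # repo id taken from the usage example below

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```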

---

## Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "AyoubChLin/lfm2.5-saudi-dialect"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "Talk about coffee in the Saudi dialect"
prompt = "تكلم باللهجة السعودية عن القهوة"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
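
Because the base model is instruction-tuned, wrapping the request in the tokenizer's chat template (if the repository ships one) is usually preferable to a raw prompt. A minimal sketch reusing the `tokenizer` and `model` objects from the example above; the sampling settings are illustrative:

```python
# Chat-template variant of the example above (assumes the tokenizer defines a chat template).
messages = [
    {"role": "user", "content": "تكلم باللهجة السعودية عن القهوة"}  # "Talk about coffee in the Saudi dialect"
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)

outputs = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```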

---

## Training Compute

* **GPU:** 1 × NVIDIA A100 (40 GB VRAM)
* **CPU:** 8 cores
* **RAM:** 16 GiB
* **Compute Environment:** Cloud training instance

---

## License

Same as the base model's license unless otherwise specified.

---

## Citation

If you use this model, please cite:

```bibtex
@misc{saudi-dialect-lfm2.5,
  author    = {Cherguelaine Ayoub},
  title     = {Saudi Dialect LFM2.5},
  year      = {2026},
  publisher = {Hugging Face}
}
```

---

## Acknowledgments

* Liquid AI for the base model
* The dataset creators
* The open-source tooling ecosystem

---
|