Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ base_model:

This model is a fine-tuned version of **Liquid AI**’s **LFM2.5‑1.2B‑Instruct**, adapted for Saudi dialect conversational generation.

-The base model belongs to the LFM2.5 family — hybrid state-space + attention language models designed for **fast on-device inference**,
+The base model belongs to the LFM2.5 family — hybrid state-space + attention language models designed for **fast on-device inference**, low memory usage, and strong performance relative to size. It has ~1.17B parameters and a 32k context length, and supports multilingual generation including Arabic.

This fine-tuned variant specializes the model for **Saudi dialect conversational patterns**, improving fluency, dialect authenticity, and instruction following for regional Arabic use cases.
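A minimal usage sketch for the resulting model card, using Hugging Face `transformers`; the repository id below is a placeholder, since the final model path is not shown in this diff:

```python
# Minimal sketch: load the fine-tuned checkpoint and generate one reply.
# "your-org/LFM2.5-1.2B-saudi-dialect" is a placeholder repo id, not the real path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/LFM2.5-1.2B-saudi-dialect"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# "How are you today?" in Saudi dialect.
messages = [{"role": "user", "content": "وش أخبارك اليوم؟"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```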
@@ -46,7 +46,7 @@ This fine-tuned variant specializes the model for **Saudi dialect conversational

* Parameters: ~1.17B
* Context length: 32,768 tokens
* Training tokens: ~28T
-* Languages: Multilingual including Arabic
+* Languages: Multilingual including Arabic

---
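The ~1.17B figure can be sanity-checked directly from the loaded weights; a quick sketch, reusing the placeholder repo id from above:

```python
# Count parameters to verify the ~1.17B figure (placeholder repo id).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("your-org/LFM2.5-1.2B-saudi-dialect")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")
```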
@@ -115,7 +115,7 @@ Qualitative evaluation indicates:

* Better conversational tone
* Higher lexical authenticity

-Dialect-specific fine-tuning is known to significantly increase dialect generation accuracy and reduce standard-Arabic drift in Arabic LLMs.
+Dialect-specific fine-tuning is known to significantly increase dialect generation accuracy and reduce standard-Arabic drift in Arabic LLMs.

---
@@ -160,7 +160,7 @@ Runs efficiently on:

* CPU inference (<1 GB memory when quantized)
* Mobile NPUs
-* Edge devices
+* Edge devices

---
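For the quantized CPU path, a sketch with `llama-cpp-python`, assuming a GGUF conversion of this model exists; the file name is hypothetical, and a 4-bit quant of a 1.2B model occupies well under 1 GB:

```python
# Sketch of quantized CPU inference via llama-cpp-python.
# The GGUF file name is hypothetical; produce one with llama.cpp's conversion tools.
from llama_cpp import Llama

llm = Llama(model_path="lfm2.5-1.2b-saudi-dialect-Q4_K_M.gguf", n_ctx=4096)
reply = llm.create_chat_completion(
    # "Give me a short tip about coffee" in Saudi dialect.
    messages=[{"role": "user", "content": "عطني نصيحة قصيرة عن القهوة"}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```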