# SILMA-9B-Instruct Fine-Tuned for Arabic QA

[Hugging Face Model](https://huggingface.co/MohammedNasser/silma_9b_instruct_ft) [License: MIT](https://opensource.org/licenses/MIT) [Python 3.9](https://www.python.org/downloads/release/python-390/)

This model is a fine-tuned version of [silma-ai/SILMA-9B-Instruct-v1.0](https://huggingface.co/silma-ai/SILMA-9B-Instruct-v1.0), optimized for Arabic Question Answering tasks. It excels at providing numerical answers to a wide range of questions in Arabic.
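Since the model targets numerical answers and Arabic text often uses Arabic-Indic digits (٠–٩), downstream scoring usually needs to normalize the output. The helper below is a hypothetical post-processing sketch, not part of the model's own tooling; the function name and behavior are assumptions.

```python
# Hypothetical post-processing helper (assumption, not shipped with the model):
# normalize Arabic-Indic digits in a generated answer to ASCII for scoring.
ARABIC_INDIC = "٠١٢٣٤٥٦٧٨٩"
TRANS = str.maketrans(ARABIC_INDIC, "0123456789")

def extract_number(answer: str) -> str:
    """Return only the digits (and decimal points) of an answer string."""
    normalized = answer.translate(TRANS)
    return "".join(ch for ch in normalized if ch.isdigit() or ch == ".")

print(extract_number("الإجابة هي ٤٢"))  # 42
```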
## Model Details

- **Model Name**: silma_9b_instruct_ft
- **Model Type**: Language Model
- **Language**: Arabic
- **Base Model**: silma-ai/SILMA-9B-Instruct-v1.0
- **Fine-Tuning Method**: PEFT with LoraConfig
- **Task**: Arabic Question Answering (Numerical Responses)
- **Training Data**: [Custom Arabic Reasoning QA dataset](https://huggingface.co/MohammedNasser/ARabic_Reasoning_QA)
- **Quantization**: 4-bit quantization using bitsandbytes
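The LoRA method listed above freezes the pretrained weights and trains only a low-rank update. A minimal NumPy sketch of that idea, with toy sizes for illustration (this is not the actual fine-tuning code):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64  # hidden size (tiny here; the real model is far larger)
r = 4   # LoRA rank

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # zero-initialized, so the update starts at 0

# LoRA replaces W @ x with (W + B @ A) @ x; only A and B are trained.
delta = B @ A
assert delta.shape == W.shape

x = rng.standard_normal(d)
y_base = W @ x
y_lora = (W + delta) @ x

# With B = 0 the adapted model matches the base model exactly.
print(np.allclose(y_base, y_lora))  # True

# Frozen vs trainable parameter counts: d*d versus 2*d*r.
print(W.size, A.size + B.size)  # 4096 512
```

The parameter comparison is the point of the technique: only 2·d·r adapter weights are updated, a small fraction of the frozen d·d matrix.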
- Optimized for Arabic language understanding and generation
- Specialized in providing numerical answers to questions
- Efficient inference with 4-bit quantization
- Fine-tuned using PEFT with LoraConfig for parameter-efficient training

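The 4-bit quantization mentioned above keeps memory low by storing each weight in roughly 4 bits and dequantizing on the fly. Below is a simplified absmax int4 round trip in NumPy, a sketch of the general idea only, not bitsandbytes' actual NF4 scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(256).astype(np.float32)  # one block of weights

# Absmax quantization: map the block onto signed 4-bit integers in [-7, 7].
scale = np.abs(w).max() / 7.0
q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)  # int8 stands in for 4-bit storage

# Dequantize when the weights are needed for computation.
w_hat = q.astype(np.float32) * scale

# The reconstruction error is bounded by half a quantization step.
err = np.abs(w - w_hat).max()
print(q.min() >= -7 and q.max() <= 7)  # True
print(err <= scale / 2 + 1e-6)         # True
```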
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 10   | 2.207200      | 1.487218        |
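To try the model, a typical transformers + bitsandbytes loading pattern looks like the following. This is a hedged sketch: the repo id comes from the model link above, and the exact 4-bit settings (`nf4`, compute dtype) are assumptions, since the card only states "4-bit quantization using bitsandbytes". Running it downloads the ~9B-parameter checkpoint and needs a GPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "MohammedNasser/silma_9b_instruct_ft"

# Assumed 4-bit settings; the card does not state the exact configuration.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```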