This model is a fine-tuned version of [unsloth/Qwen2-VL-2B-Instruct](unsloth/Qwen2-VL-2B-Instruct).
- **Languages**: Arabic
- **Tasks**: OCR (Optical Character Recognition)
## Performance Evaluation

The model has been evaluated on standard OCR metrics, including Word Error Rate (WER), Character Error Rate (CER), and BLEU score.

### Metrics

| Model            | WER ↓ | CER ↓ | BLEU ↑ |
|------------------|-------|-------|--------|
| Fine-Tuned Model | 0.068 | 0.019 | 0.860  |
| Base Model       | 1.344 | 1.191 | 0.201  |
| EasyOCR          | 0.908 | 0.617 | 0.152  |
| Tesseract OCR    | 0.428 | 0.226 | 0.410  |
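Both WER and CER are normalized Levenshtein (edit) distances, computed at word and character granularity respectively. A minimal sketch of how they can be computed, assuming plain-text reference/prediction pairs; the helper names are illustrative and not taken from this model's evaluation code:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (rolling single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution (free on match)
    return dp[-1]

def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character Error Rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

Note that because the denominator is the reference length, both metrics can exceed 1.0 when the hypothesis contains many insertions, which is why the Base Model's WER of 1.344 is possible.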
### Key Results

- **WER:** 0.068 (93.2% word accuracy)
- **CER:** 0.019 (98.1% character accuracy)
- **BLEU:** 0.860

### Performance Comparison

The Fine-Tuned Model outperforms the other solutions with:

- 95% reduction in WER compared to the Base Model
- 98% reduction in CER compared to the Base Model
- 328% improvement in BLEU score compared to the Base Model
- 84% lower WER than Tesseract OCR
- 92% lower WER than EasyOCR
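The relative figures above follow directly from the metrics table. As a quick sanity check, the arithmetic can be reproduced like this (scores copied verbatim from the table):

```python
# Scores copied from the metrics table above.
scores = {
    "Fine-Tuned Model": {"WER": 0.068, "CER": 0.019, "BLEU": 0.860},
    "Base Model":       {"WER": 1.344, "CER": 1.191, "BLEU": 0.201},
    "EasyOCR":          {"WER": 0.908, "CER": 0.617, "BLEU": 0.152},
    "Tesseract OCR":    {"WER": 0.428, "CER": 0.226, "BLEU": 0.410},
}

def pct_reduction(ours, baseline):
    """Relative reduction of an error metric versus a baseline, in percent."""
    return 100 * (1 - ours / baseline)

ft, base = scores["Fine-Tuned Model"], scores["Base Model"]
print(f"WER reduction vs Base Model:    {pct_reduction(ft['WER'], base['WER']):.0f}%")   # 95%
print(f"CER reduction vs Base Model:    {pct_reduction(ft['CER'], base['CER']):.0f}%")   # 98%
print(f"BLEU improvement vs Base Model: {100 * (ft['BLEU'] / base['BLEU'] - 1):.0f}%")   # 328%
print(f"WER reduction vs Tesseract OCR: {pct_reduction(ft['WER'], scores['Tesseract OCR']['WER']):.0f}%")  # 84%
```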
## Performance Comparison Charts