📜 Azhar Model v0.1 (Hybrid Sharia Research Model)

📊 Training Metrics (Performance Evaluation)

The following metrics demonstrate the model's significant improvement in processing classical Sharia texts after specialized Fine-Tuning:

| Metric           | Before Training | After Training |
|------------------|-----------------|----------------|
| Training Loss    | 3.10            | 2.8490         |
| Perplexity (PPL) | 22.14           | 17.27          |
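The reported perplexity values are consistent with the standard definition PPL = exp(cross-entropy loss), which can be verified directly (a quick sanity check, not part of the original training code):

```python
import math

def perplexity(loss: float) -> float:
    """Perplexity is the exponential of the mean cross-entropy loss."""
    return math.exp(loss)

# ~22.20 vs. the reported 22.14 (the logged loss was likely rounded to 3.10)
print(f"Before: {perplexity(3.10):.2f}")
# ~17.27, matching the reported value
print(f"After:  {perplexity(2.8490):.2f}")
```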

🧪 Quad-Comparison Results (جدول المقارنة الرباعية)

Arabic Version (النسخة العربية)

| السؤال (Question) | 1. Base | 2. RAG Only | 3. FT Only | 4. RAG & FT Hybrid |
|---|---|---|---|---|
| صلاة الكسوف | خلط فقهي | دقيق (نقل آلي) | أسلوب شرعي (خطأ حسابي) | دقة كاملة + أسلوب شرعي رصين |
| أركان البيع | مشتت وغير دقيق | دقيق (التزام بالنص) | أسلوب شرعي متمكن | أسلوب شرعي + توثيق المصدر |
| تعريف القياس | خلط مع سياق الحديث | دقيق (تعريف مباشر) | أسلوب أصولي متخصص | منضبط بالتعريف الشرعي الجامع |

English Version (النسخة الإنجليزية)

| Query | 1. Base | 2. RAG Only | 3. FT Only | 4. RAG & FT Hybrid |
|---|---|---|---|---|
| Eclipse Prayer | Factual Error | Accurate (Robotic) | Juristic Tone (Math error) | Full Accuracy + Solid Juristic Tone |
| Sale Pillars | Disorganized | Accurate (Verbatim) | Competent Juristic Style | Juristic Style + Source Citation |
| Qiyas Definition | Mixed Context | Accurate (Direct) | Specialized Usuli Style | Disciplined Comprehensive Definition |

📈 Visual Analytics

The training curve shows steady convergence, confirming the model's stability during the fine-tuning process.

(Figure: training loss curve)

🛠️ Methodology & Scientific Approach

Azhar Model v0.1 employs a Hybrid Methodology to ensure academic excellence:

  1. Juristic Tone (Fine-Tuning): Acquired through training on specialized Sharia corpora to master formal terminology.
  2. Academic Integrity (RAG): Implemented to keep the model strictly grounded in verified reference texts, substantially reducing hallucinations.
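The hybrid flow above can be sketched as follows. This is a minimal illustration with a toy keyword-overlap retriever and hypothetical helper names (`retrieve`, `build_prompt`); the actual Azhar pipeline is not published here, and in practice a dense retriever and the fine-tuned model itself would fill these roles:

```python
def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a grounded prompt: retrieved sources first, then the question."""
    context = "\n".join(f"[Source] {p}" for p in passages)
    return f"{context}\n\nQuestion: {query}\nAnswer strictly from the sources above:"

corpus = [
    "The eclipse prayer (salat al-kusuf) consists of two rak'ahs, each with two bowings.",
    "The pillars of sale include offer, acceptance, the two parties, and the object of sale.",
]
# The grounded prompt would then be passed to the fine-tuned model for generation.
prompt = build_prompt("How is the eclipse prayer performed?",
                      retrieve("How is the eclipse prayer performed?", corpus))
print(prompt)
```

The key design point is the ordering: retrieval constrains the context *before* generation, so the fine-tuned model supplies juristic style while the sources supply the facts.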

📝 Citation

MCs. Shamel, "Azhar Model v0.1: Integrating Fine-Tuning and RAG for Sharia Sciences," 2026.


Model tree for shamilmohammedi/Azhar_Model_v0.1

Base model: Qwen/Qwen2.5-7B (this model is a fine-tune of that base).