# SINA Medical Reasoning LLM (Fine-Tuned by Ali Nadhir)

## Model Overview
SINA Medical Reasoning LLM is a fine-tuned version of Qwen2.5-7B, developed specifically for advanced medical diagnostic reasoning. By incorporating Chain of Thought (CoT) methodologies, the model is designed to perform step-by-step clinical analysis and decision-making.
Inspired by the legacy of Ibn Sina (Avicenna), this model merges classical medical reasoning principles with the capabilities of large language models.
## Fine-Tuning Details

- **Base model:** Qwen2.5-7B
- **Fine-tuned by:** Ali Nadhir
- **Library:** Hugging Face Transformers with Unsloth
- **Dataset:** FreedomIntelligence/medical-o1-reasoning-SFT
- **Hardware:** 1× NVIDIA A100 (40 GB VRAM)
- **Objective:** Equip a non-reasoning LLM with structured medical reasoning capabilities to support accurate and explainable clinical inference.
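The setup above can be sketched as a QLoRA-style supervised fine-tuning run with Unsloth and TRL. This is a minimal illustration, not the exact recipe used for this model: all hyperparameters are assumptions chosen to fit a single A100 40 GB, and the dataset field names (`Question`, `Complex_CoT`, `Response`) are taken from the published dataset card.

```python
# Illustrative QLoRA fine-tuning sketch (hyperparameters are assumptions,
# not the actual training recipe for this model).
def training_config() -> dict:
    """Return illustrative SFT hyperparameters for one A100 40 GB."""
    return {
        "model_name": "Qwen/Qwen2.5-7B",
        "dataset": "FreedomIntelligence/medical-o1-reasoning-SFT",
        "max_seq_length": 2048,
        "lora_r": 16,
        "lora_alpha": 16,
        "learning_rate": 2e-4,
        "per_device_train_batch_size": 2,
        "gradient_accumulation_steps": 8,
    }


def run_finetune(cfg: dict) -> None:
    """Launch the SFT run (requires: pip install unsloth trl datasets)."""
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer
    from unsloth import FastLanguageModel

    # Load the base model in 4-bit so the 7B weights fit in 40 GB of VRAM
    # alongside the LoRA adapters and optimizer state.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=cfg["model_name"],
        max_seq_length=cfg["max_seq_length"],
        load_in_4bit=True,
    )
    model = FastLanguageModel.get_peft_model(
        model, r=cfg["lora_r"], lora_alpha=cfg["lora_alpha"]
    )

    # Each example pairs a question with a Chain of Thought trace and a
    # final response; fold them into one training text per example.
    def to_text(ex):
        return {
            "text": f"Question: {ex['Question']}\n"
                    f"Reasoning: {ex['Complex_CoT']}\n"
                    f"Answer: {ex['Response']}"
        }

    dataset = load_dataset(cfg["dataset"], "en", split="train").map(to_text)

    SFTTrainer(
        model=model,
        train_dataset=dataset,
        args=SFTConfig(
            per_device_train_batch_size=cfg["per_device_train_batch_size"],
            gradient_accumulation_steps=cfg["gradient_accumulation_steps"],
            learning_rate=cfg["learning_rate"],
            num_train_epochs=1,
        ),
    ).train()


# run_finetune(training_config())  # uncomment on a machine with a suitable GPU
```

Loading the base model in 4-bit with LoRA adapters is what makes a 7B fine-tune feasible on a single 40 GB card; a full-precision full-parameter run would not fit.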
## Reasoning Capabilities & Use Cases

Fine-tuning on Chain of Thought examples from the medical domain improves the model's performance in:

- Differential diagnosis and symptom analysis
- Multi-step clinical reasoning and logic chaining
- Structured Q&A in medical consultations
- Educational simulations and AI-assisted diagnosis

Ideal for:

- Clinical AI prototypes
- Healthcare research and experimentation
- MedEd (Medical Education) tools
- Diagnostic reasoning assistants
> **Note:** This release includes only the 4-bit quantized version of the model, optimized for resource-constrained deployment.
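A 4-bit checkpoint can be loaded with Transformers and bitsandbytes as sketched below. This is a hedged example: the repo id is a placeholder (the card does not state the model's Hugging Face repo id), and the system prompt is an illustrative way to elicit step-by-step reasoning, not wording shipped with the model.

```python
# Minimal 4-bit inference sketch. REPO_ID is a placeholder — substitute the
# model's actual Hugging Face repo id, which is not stated in this card.
REPO_ID = "path/to/SINA-Medical-Reasoning-LLM"  # placeholder


def build_messages(question: str) -> list:
    """Wrap a clinical question in a chat format that asks for
    step-by-step reasoning before a final answer."""
    system = (
        "You are a medical reasoning assistant. Work through the case step "
        "by step (findings, differential diagnosis, conclusion) before "
        "answering. Output is for research and education, not clinical use."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]


def generate_answer(question: str, max_new_tokens: int = 512) -> str:
    """Load the model in 4-bit via bitsandbytes and generate an answer
    (requires: pip install transformers accelerate bitsandbytes, plus a GPU)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(
        REPO_ID,
        quantization_config=BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
        ),
        device_map="auto",
    )
    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


# Example (GPU and real repo id required):
# print(generate_answer("A 45-year-old presents with crushing chest pain radiating to the jaw. Next step?"))
```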